
Programming

Programming is the process of writing instructions and commands for a computer, or for another programmable device such as a DVD player or an audio/video receiver in a modern communication system, telling the device how to handle data or how to carry out a required series of actions known as an algorithm.

The programming process follows the rules of the language the programmer has chosen. Each programming language has its own characteristics that distinguish it from the others and make it more or less suitable for a particular type of program and for the task that program is meant to perform. Programming languages also share common characteristics, since all of them were designed to interact with the computer, and programming languages (software) evolve alongside the development of computer hardware.

When the computer was invented in the 1940s and 1950s (following the electrical calculating devices of the 1920s), it ran on large numbers of electronic valves, and its programming language was correspondingly difficult: programs were entered as sequences made up only of zeros (0) and ones (1), because the computer distinguishes just two states, the presence of current (1) or its absence (0). This made life hard for programmers. With the invention of the transistor, the computer shrank greatly in size while its capabilities increased, and at the same time specialists were able to design languages that were easier to use and widely understood. That process of development and simplification is still under way; languages that are easy for programmers to work with are called high-level languages.

Computer programming is the process of writing, testing, debugging, and developing the source code of a computer program, carried out by humans. Programming aims to create programs that implement algorithms with a specific behaviour, in the sense that they have a predefined function and expected results. The process is done using a programming language, and its goal is a program that performs particular operations or exhibits a particular desired behaviour. In general, programming requires knowledge from several fields, including mathematics, logic, and algorithms.

Programming History

Programmable devices have existed since at least 1206 AD, when a musical automaton could be programmed, by means of pegs and cams, to play various rhythms and drum patterns. The Jacquard loom of 1801 could produce entirely different weaves by changing its "program": a series of pasteboard cards with holes punched in them.

However, the first computer program is usually dated to 1843, when Ada Lovelace published an algorithm intended to be carried out by Charles Babbage's Analytical Engine.

In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. Later, a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and the first electronic computers were programmed in a similar way.

Machine code was the language of early programs. It was written in the instruction set of a particular machine, often in binary notation. Assembly languages were soon developed, allowing the programmer to specify instructions in text form (e.g. ADD X, TOTAL), with abbreviations for each operation code and meaningful names for memory addresses. However, because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages.
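
As a rough illustration of these levels of abstraction, the short C program below (an assumed example, not taken from the article) performs the same addition that a mnemonic such as ADD X, TOTAL expresses in assembly; the assembler in turn encodes that mnemonic as the machine's binary opcode, with the exact bit pattern depending on the instruction set.

    #include <stdio.h>

    int main(void)
    {
        int x = 7;
        int total = 35;

        /* High-level form of something like the mnemonic "ADD X, TOTAL":
         * the compiler and assembler translate this into the binary
         * machine instruction of the target processor (the exact encoding
         * is instruction-set specific, so the correspondence is only
         * illustrative). */
        total = total + x;

        printf("total = %d\n", total);
        return 0;
    }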

High-level programming languages made the program development process simpler and more understandable. The first widely used high-level language was FORTRAN, and several languages were developed after it, notably COBOL for business data processing and Lisp for computer research.

By the late 1960s, data storage devices and computer terminals had become inexpensive enough that programs could be created by typing them directly into the computer.

Programming languages

It is worth pausing here over the meaning of the word language: it is the means of communication and mutual understanding between people or, in the case of a computer, the way the computer understands a human request. In everyday life we therefore use different sets of terms and words depending on the need, and programming languages have this property as well. Many programming languages exist, differing in how they work and in their purpose, but in the end all of them are translated into machine language, the language of 0 and 1.

The programmer must therefore be familiar with several programming languages and know which one is appropriate for implementing a given program.

The only language a computer understands and can execute directly is machine language. At first, programmers had to analyse and work with machine code in its rigid, opaque form of 0s and 1s, a process that was complex and error-prone because such code is not clearly understandable to humans. Intermediate languages were therefore created to bridge human language and machine language, starting with assembly language and later developing into high-level languages such as C and BASIC. Programs written in these languages are processed by specialized programs, interpreters and compilers, which translate the source lines into machine language so that the computer can carry out the commands and produce clear results.
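
For instance, a minimal C source file like the one below (a generic example, not taken from the article) is turned into machine code by a compiler; with gcc this could be done with a command such as gcc hello.c -o hello, and the resulting executable is what the computer actually runs.

    #include <stdio.h>

    /* hello.c: the compiler translates this human-readable source into
     * the machine instructions that the processor executes directly. */
    int main(void)
    {
        printf("Hello from a compiled program\n");
        return 0;
    }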

Modern Programming

Quality Requirements

Whichever approach to the software development process is taken, the final program must satisfy certain fundamental properties, such as the following:

Reliability: how often the program's results are correct. This depends on the conceptual correctness of the algorithms and on minimizing programming mistakes such as errors in resource management (e.g. buffer overflows) and logic errors (e.g. division by zero); the code sketch after this list illustrates guarding against such errors.

Robustness: the extent to which the program anticipates problems caused by errors that are not bugs. This includes situations such as incorrect, inappropriate, or corrupt data; unavailability of needed resources such as memory, operating system services, or network connections; user error; and unexpected power outages.

Usability: the ergonomics of the program: the ease with which a person can use it for its intended purpose, or in some cases even for unanticipated purposes.

Portability: the range of computer hardware and operating systems on which a program's source code can be compiled/interpreted and run. This depends on differences in the programming facilities offered by different platforms, including hardware and operating system resources, the expected behaviour of the hardware and operating system, and the availability of platform-specific compilers (and sometimes libraries) for the source-code language.

Maintainability: the ease with which a program can be modified by its current or future developers in order to make improvements or customizations, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make a difference in this regard. This quality may not be directly apparent to the end user, but it can greatly affect the software's long-term fate.

Efficiency/performance: a measure of the system resources a program consumes (processor time, memory, slow devices such as disks, network bandwidth, and to some extent even user interaction): the less, the better. This also includes careful resource management, for example cleaning up temporary files and eliminating memory leaks.
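
The small C sketch below (an assumed example, not taken from the article) ties a few of these attributes to concrete code: reliability by guarding against division by zero, robustness by checking that a needed resource is actually available, and efficiency by releasing resources once they are no longer needed.

    #include <stdio.h>
    #include <stdlib.h>

    /* Reliability: refuse to divide by zero instead of producing a
     * meaningless result or crashing. */
    static int safe_divide(int a, int b, int *ok)
    {
        if (b == 0) {
            *ok = 0;
            return 0;
        }
        *ok = 1;
        return a / b;
    }

    int main(void)
    {
        /* Robustness: a needed resource (here, memory) may be unavailable. */
        int *buffer = malloc(100 * sizeof *buffer);
        if (buffer == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }

        int ok;
        int result = safe_divide(10, 0, &ok);
        if (ok)
            printf("result = %d\n", result);
        else
            printf("division by zero avoided\n");

        /* Efficiency: avoid a memory leak by freeing what was allocated. */
        free(buffer);
        return 0;
    }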

Readability of source code

In computer science, readability refers to how easily a human reader can understand the purpose, control flow, and operation of source code. It affects the quality aspects mentioned above, including portability, usability, and, most importantly, maintainability.

Readability matters because programmers spend most of their time reading, trying to understand, and modifying existing source code rather than writing new code. Unreadable code often leads to bugs, inefficiency, and duplicated code. One study found that a few simple readability transformations made code shorter and drastically reduced the time needed to understand it.

Following a consistent programming style often helps readability. However, readability is about more than programming style: many factors that have little or nothing to do with the computer's ability to compile and execute the code efficiently also contribute to it. Some of these factors include:

Indentation style
Commenting
Naming conventions
Decomposition (how the code is broken into parts)

Various visual programming languages have also been developed with the aim of addressing readability concerns by adopting unconventional ways of structuring and displaying code. Integrated development environments (IDEs) aim to integrate all of these aids, and techniques such as code refactoring can greatly enhance readability.
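
As a small illustration (an assumed example, not from the article), the two C functions below compute exactly the same result and compile to essentially the same machine code, yet naming, indentation, and a brief comment make the second one far easier to read and maintain.

    #include <stdio.h>

    /* Hard to read: terse names, no structure, no explanation. */
    int f(int*a,int n){int s=0,i;for(i=0;i<n;i++)s+=a[i];return s;}

    /* Easier to read: descriptive names, consistent indentation, a comment. */
    int sum_of_array(const int *values, int count)
    {
        int total = 0;
        for (int i = 0; i < count; i++)
            total += values[i];   /* accumulate each element */
        return total;
    }

    int main(void)
    {
        int data[] = {1, 2, 3, 4};
        printf("%d %d\n", f(data, 4), sum_of_array(data, 4));
        return 0;
    }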

Algorithmic complexity

The academic field and engineering practice of computer programming are largely concerned with discovering and implementing the most efficient algorithms for a given class of problems.

For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource usage, such as execution time or memory consumption, as a function of the size of the input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities, and they use this knowledge to choose the algorithm best suited to the circumstances.
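
For example (an illustrative sketch, not taken from the article), searching an array can be done with a linear scan, whose worst-case cost grows as O(n), or, if the array is sorted, with binary search, whose cost grows as O(log n); for large inputs the difference between the two orders is dramatic.

    #include <stdio.h>

    /* Linear search: O(n) comparisons in the worst case. */
    int linear_search(const int *a, int n, int key)
    {
        for (int i = 0; i < n; i++)
            if (a[i] == key)
                return i;
        return -1;
    }

    /* Binary search on a sorted array: O(log n) comparisons. */
    int binary_search(const int *a, int n, int key)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key)
                return mid;
            if (a[mid] < key)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return -1;
    }

    int main(void)
    {
        int sorted[] = {2, 3, 5, 7, 11, 13, 17};
        printf("%d %d\n", linear_search(sorted, 7, 11),
                          binary_search(sorted, 7, 11));
        return 0;
    }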

Chess algorithms as an example

In the 1950s, the paper "Programming a Computer for Playing Chess" described the minimax algorithm, an early milestone in the history of complex algorithms.

A course on "IBM Deep Blue" (computer chess) is part of the curriculum of the Computer Science Division at Stanford University.
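
A minimal sketch of the minimax idea is shown below (an assumed, heavily simplified example: the "game" is just a fixed binary tree of leaf scores, whereas a chess engine such as Deep Blue generates legal moves and evaluates board positions). The maximizing player takes the child with the highest value, the minimizing opponent takes the lowest, and the two alternate level by level.

    #include <stdio.h>

    /* Minimax over a complete binary tree stored in an array: node i has
     * children 2*i and 2*i+1, and the leaves hold static evaluation scores. */
    static int minimax(const int *leaves, int node, int depth, int maximizing)
    {
        if (depth == 0)
            return leaves[node];                  /* leaf: static evaluation */

        int left  = minimax(leaves, 2 * node,     depth - 1, !maximizing);
        int right = minimax(leaves, 2 * node + 1, depth - 1, !maximizing);

        if (maximizing)
            return left > right ? left : right;   /* our move: best outcome */
        return left < right ? left : right;       /* opponent: worst for us */
    }

    int main(void)
    {
        /* Leaves of a depth-2 tree occupy indices 4..7 when the root is node 1. */
        int leaves[8] = {0, 0, 0, 0, 3, 5, 2, 9};
        printf("minimax value: %d\n", minimax(leaves, 1, 2, 1));
        return 0;
    }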


Programming Methodologies

The first step in most formal software development processes is requirements analysis, followed by modeling, implementation, testing, and debugging. There are many different approaches to each of these tasks. One common technique for requirements analysis is use-case analysis, and there are likewise many approaches to the software development process as a whole.

Common modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA).

Measuring Language Usage

It is very difficult to determine which modern programming languages are the most widely used. Ways of measuring a language's popularity include counting the number of job advertisements that mention it, the number of books sold and courses taught about it (which overestimates the importance of newer languages), and estimates of the number of existing lines of code written in it (which underestimates the number of users of business languages such as COBOL).

Some languages are very popular for particular kinds of applications, while others are used regularly to write applications of many different kinds. For example, COBOL is still strong in corporate data centres, mostly on mainframes; Fortran dominates in engineering applications; scripting languages are common in web development; and C is widespread in embedded software. Many applications use a mixture of several languages in their construction and use. New languages are generally built around a previous language with new functionality added (for example, C++ adds object orientation (OOP) to C, and Java adds memory management and bytecode to C++, losing some efficiency and low-level data manipulation as a result).

Debugging

Debugging is a very important part of the software development process, because defects in a program can have serious consequences for its users. Some languages are more prone to certain kinds of errors because their specifications do not require compilers to perform as much checking as other languages do. Using a static code analysis tool can help detect some potential problems. Usually the first step in debugging is to try to reproduce the problem, which can be a non-trivial task, for example with parallel processes or unusual software bugs; the particular user environment and usage history can also make the problem hard to reproduce.

Once the defect has been reproduced, the program's input may need to be simplified to make debugging easier. For example, a bug in a compiler can make it crash when parsing a large source file, yet after simplification of the test case, only a few lines from the original file may be enough to reproduce the same crash. Such simplification can be done manually using a divide-and-conquer approach: the programmer tries removing parts of the original test case and checks whether the problem still occurs. When debugging a problem in a GUI, the programmer can likewise try omitting some of the user interactions from the original problem description and check whether the remaining actions are enough for the bug to appear.
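
The toy C sketch below illustrates this divide-and-conquer reduction (an assumed example; still_fails is a hypothetical stand-in for "run the program on these input lines and check whether the failure still occurs"). It repeatedly keeps whichever half of the input still reproduces the failure until neither half alone does.

    #include <stdio.h>

    /* Hypothetical failure check: here the bug "reproduces" whenever
     * line 42 is present in the input. */
    static int still_fails(const int *lines, int n)
    {
        for (int i = 0; i < n; i++)
            if (lines[i] == 42)
                return 1;
        return 0;
    }

    int main(void)
    {
        int lines[] = {10, 20, 30, 42, 50, 60, 70, 80};
        int n = sizeof lines / sizeof lines[0];

        /* Repeatedly keep whichever half still reproduces the failure. */
        int lo = 0, hi = n;
        while (hi - lo > 1) {
            int mid = (lo + hi) / 2;
            if (still_fails(lines + lo, mid - lo))
                hi = mid;                     /* first half is enough */
            else if (still_fails(lines + mid, hi - mid))
                lo = mid;                     /* second half is enough */
            else
                break;                        /* need parts of both halves */
        }
        printf("reduced to %d line(s), starting with %d\n", hi - lo, lines[lo]);
        return 0;
    }

Real test-case reducers automate this search far more systematically, but the underlying idea is the same divide-and-conquer process described above.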
