
Thursday, September 19, 2019

For, While and Do-While

1) What is looping? Describe “for”, “while” and “do-while” loops with appropriate examples.
Answer:
Looping statements or Iterative Statements
A loop is a part of a program that is executed repeatedly. The loop is controlled by a condition, and the repetition continues as long as that condition remains true. A loop is declared and executed in the following steps:
• Initialize the loop by declaring a variable.
• Check the condition to enter the loop.
• Execute the statements inside the loop.
• Increment or decrement the value of the variable.
For loop:
This is an entry controlled looping statement. In this loop structure, more than one variable can be initialized. One of the most important features of this loop is that the three actions — variable initialization, condition checking and increment/decrement — are written together in one place. The “for” loop is therefore more concise and flexible than the while and do-while loops.
Syntax:
for(initialization; test-condition; increment/decrement)
{
statements;
}
Example:
#include <stdio.h>
#include <conio.h>
main()
{
    int i;                        /* loop variable */
    clrscr();
    for(i = 1; i <= 5; i++)       /* initialization, condition and increment together */
    {
        printf("%d\n", i);        /* prints 1 to 5, one number per line */
    }
    getch();
}
while loop:
This is an entry controlled looping statement. It is used to repeat a block of statements as long as the condition remains true.
Syntax:
while(condition)
{
statements;
increment/decrement;
}
In the above syntax, the condition is checked first. If it is true, program control enters the loop and executes the block of statements associated with it. At the end of the loop, an increment or decrement changes the value of the variable. This process continues until the test condition becomes false.
Example:
#include <stdio.h>
#include <conio.h>
main()
{
    int a;
    clrscr();
    a = 1;                  /* initialization */
    while(a <= 5)           /* condition is checked before each pass */
    {
        printf("%d\n", a);
        a++;                /* increment */
    }
    getch();
}

Do-While loop:
This is an exit controlled looping statement. Sometimes there is a need to execute a block of statements first and then check the condition; this type of loop is used in such cases. Here the block of statements is executed first and the condition is checked afterwards.
Syntax:
do
{
statements;
(increment/decrement);
}while(condition);
In the above syntax, the block of statements is executed first. At the end of the loop, the while condition is evaluated. If it is true, program control goes back to execute the body of the loop once again; this continues as long as the condition remains true. When it becomes false, the loop terminates. As a result, the body always executes at least once, as the sketch after the example below illustrates.
Example:
#include <stdio.h>
#include <conio.h>
main()
{
    int a;
    a = 5;
    do
    {
        printf("%d\n", a);    /* prints 5 to 10 */
        a++;                  /* increment */
    } while(a <= 10);         /* condition is checked after the body */
    getch();
}
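Because the do-while loop checks its condition only after the body has run, the body always executes at least once, even when the condition is false from the start. The sketch below illustrates this point; it is plain standard C (no clrscr()/getch()), and the variable name and limit are chosen only for illustration.
#include <stdio.h>
int main()
{
    int a = 20;                       /* the condition a <= 10 is already false */

    do
    {
        printf("do-while body ran once: a = %d\n", a);   /* still printed once */
    } while(a <= 10);

    while(a <= 10)                    /* a while loop with the same condition */
    {
        printf("this line is never printed\n");
    }
    return 0;
}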

Tuesday, September 17, 2019

Programming Language & Its concept

1. What is a programming language? Explain the types of programming languages with their merits and demerits.
Ans: A programming language is an artificial language that uses a set of instructions, along with some mathematical notation, to write computer programs. The main types of programming languages are the high-level, machine-level and assembly languages.
The types of programming languages are as follows:
a. Low level language
It is a machine-dependent programming language that uses short abbreviated words and binary digits. Programmers require in-depth knowledge of computer architecture.
It is further divided into two parts:
i. Machine Level language
It is the first generation language and is written using the binary digits 0 (off) and 1 (on). It is machine dependent, time consuming and very difficult, as programmers must have sound knowledge of computer architecture.
Merits:
- It is directly understood by the CPU.
- It does not require a language processor and takes less time to execute, as the code is understood directly.
Demerits:
- It is machine dependent, so it is not portable.
- It is difficult to understand, write and debug.
ii. Assembly Language
It is a second generation, machine-dependent language, but it is easier to memorize and to write programs in than machine-level language. It uses short abbreviated words, called mnemonics, such as ADD for addition and SUB for subtraction. A language processor called an assembler is used to convert assembly language into a computer-understandable form.
Merits:
- It is easier to understand and debug than machine code.
- It is less time consuming to write, as it uses words and letters.
Demerits:
- It is a machine-oriented language, so it is not portable.
- It is not as fast as machine language, as it requires a language processor.
b. High level language
The set of instructions written using simple English words and phrases, with some mathematical notation and a strict syntax discipline, is called a high-level language. It cannot be understood by the computer directly, so a compiler or interpreter is used to convert it into machine code; the converted code is called object code and the original instructions are called source code (see the short C sketch after the classification below).
Merits
- Simple English is used for program coding, so it is easy to understand and debug.
- It is machine independent, and both problem and procedure oriented.
Demerits:
- It requires a language processor, so it is slower than machine language.
- It has lower efficiency and flexibility.
It is further classified into three types:
i. Procedure oriented language
It is a third generation language oriented towards the procedure for the solution. It contains a set vocabulary called keywords and symbols, which can be used along with English-like words while writing a program.
ii. Problem oriented language
It is a fourth generation language, closer to the fifth generation. It does not need step-by-step instructions; programmers can use English-like words directly to state what they want when developing their own applications. Example: SQL, PHP, etc.
iii. Natural language
The means of communication between human beings, using languages such as Nepali, English, Hindi and Japanese, is called natural language. Such languages are being designed for use in artificial intelligence, neural networks and expert systems as fifth generation languages. Example: LISP, PROLOG, etc.
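As an illustration of a high-level language, the short C sketch below expresses its logic in near-English statements. The whole file is the source code, and a compiler such as Turbo C or GCC would translate it into object code. The variable name and values are chosen only for illustration.
/* source code written in a high-level language (C) */
#include <stdio.h>
int main()
{
    int marks = 75;            /* English-like keywords: int, if, else */

    if (marks >= 40)
        printf("Pass\n");
    else
        printf("Fail\n");

    return 0;
}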
2. What is a language processor? Explain its types.
Ans: The program or software that converts high-level language or assembly language into a computer-understandable form, or machine code, is called a language processor.
Following are the different types of language processor:
a. Compiler
It is a translator or language processor which converts a source program written in a high-level language or 4GL into machine code all at once. The compiler reads the complete program first; if the program is free of syntax errors, it converts the source program into an object program in one pass, so it is faster than an interpreter. It is a large and complex program, so it requires more memory space. Example: Javac.exe, TurboC.exe, etc.
b. Interpreter
It is also a language processor, used to convert a source program written in a 4GL or HLL into machine code line by line as the CPU executes the instructions. Each time the program is run, every line is checked for syntax errors and converted into equivalent machine code. It is simple, easy to use and takes less memory, but the translated code cannot be saved for future use; it is therefore slower and more time consuming. Example: BASIC, LISP, etc.
c. Assembler
The assembler is also a language processor, used to convert the mnemonics used in assembly language into machine code. It is somewhat complex and difficult, though not as much as a compiler, and it is slower than the compiler and interpreter. Example: Microsoft Assembler.
3. What is programming? Differentiate between compiler and interpreter.
Ans: Programming is the process or technique of creating a program, carrying out different steps in an organized way using a programming language.
Compiler
i. It translates a high-level language program into machine code, the whole program at a time.
ii. It is slow for debugging.
iii. It creates an object program.
iv. All the errors in the program are reported at once.
v. It takes less time to execute the program.
Interpreter
i. It translates a high-level language program into machine code, one statement at a time.
ii. It is faster for debugging.
iii. It does not create an object program.
iv. It reports errors one at a time, as each statement is translated.
v. It takes more time to execute the program.
4. What are the two types of programming errors? How are they detected?
Ans: Following are the two types of errors:
a. Syntax error: 
Any deviation from the syntax rules of the language in a program results in an error called a syntax error. Syntax errors are detected by the compiler during the compilation process, and compilation terminates after displaying these errors.
b. Semantic error:
The error that is related to the logic of the program is called a semantic error. Semantic errors are also called logical errors; they can be caused by a wrong path, the omission of some condition, improper use of operators, etc. These errors cannot be detected easily by the compiler, so the program compiles but produces a wrong result when it is run.
Following are the ways of finding such errors:
i. A run-time error can be found only when the code is compiled and run; the result is invalid, typically because of a mismatch of data types, and it is difficult to isolate.
ii. A logical error, caused by a wrong path, the omission of a condition or improper use of an operator, can be found only by executing the program and analyzing its result.
iii. A latency error, also called a hidden error, appears only when a particular set of data items is used. For example, in the statement R = (A+B)/(A-B), the calculation goes wrong only when A and B are equal, as the sketch below shows.
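A small sketch in C illustrating the difference between these error types; the variable values are chosen only to trigger the hidden error.
#include <stdio.h>
int main()
{
    int A = 4, B = 4;
    float R;

    /* A syntax error would be caught by the compiler, for example a
       missing semicolon:  printf("hello")  */

    /* Latency (hidden) error: the line compiles cleanly, but when
       A == B the divisor (A - B) is zero and the result is invalid. */
    R = (float)(A + B) / (A - B);

    printf("R = %f\n", R);
    return 0;
}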

Wednesday, September 11, 2019

Generation of Computer


GENERATIONS OF COMPUTERS
The classification of computer generations is mainly based on the basic electronic devices used; the architecture, languages and modes of operation in different periods of time are also considered. Based on the period of development and the features incorporated, computers are classified into different generations, from the first generation to the fifth generation.
The classification and time periods are given below:
1.) First Generation Computer (1945-1955)
First generation computers were characterized by vacuum tubes as the CPU, magnetic drums for data storage, and machine languages for giving instructions. The computers of this generation were very large in size and were called room-sized computers.
The programming of first generation computers was done in machine language (0s and 1s). Afterward, assembly languages were developed and used in first generation computers.
Features of first generation computers:
  • Technology used: vacuum tube
  • Machine languages were used to instruct the computer.
  • Magnetic core memory was used as primary memory.
  • Electrostatic tubes, paper tape, punched cards and magnetic tape were used as storage media.
  • Punched cards and printing devices were used for input/output operations and to store the results.
  • They occupied a very large space and were slow, inefficient and unreliable due to low accuracy.
  • Power consumption was very high and they generated a lot of heat.
  • They could only perform straightforward, simple numerical calculations.
  • Computers were very expensive.
Examples of first generation computers are the ENIAC, UNIVAC, EDVAC and EDSAC.
2.) Second Generation Computer (1957-1963)
Second generation computers replaced machine language with assembly language, allowing abbreviated programming codes to replace long, difficult binary codes.
The transistor was developed in this generation. A transistor transfers electric signals across a resistor and was highly reliable compared to vacuum tubes.
The transistor was far superior in performance because of its miniature size, smaller power consumption and lower heat production. Second generation computers used these semiconductor devices.

Some of its features are:
  • Technology used: Transistor
  • Operating speed was in terms of microseconds.
  • Assembly language and machine-independent languages such as COBOL (Common Business Oriented Language) and FORTRAN (Formula Translation) were introduced.
  • Magnetic core memory was used as primary memory.
  • Magnetic drum and magnetic tape were used as secondary memory.
  • Power required to operate them was low.
  • They could perform scientific calculations, such as solving differential equations.
  • Storage capacity and the use of computers increased.
3.) Third Generation Computer (1964-1971)
Third generation computers replaced transistors with integrated circuits, popularly known as chips. Scientists managed to fit many components onto a single chip. As a result, computers became ever smaller as more components were squeezed onto the chip.

Magnetic disks began to replace magnetic tape for auxiliary storage, and video display terminals were introduced for the output of data. Keyboards were used for the input of data. A new operating system was introduced for automatic processing and multi-programming.

These computers were highly reliable, relatively expensive and faster than those of the previous generation. High-level programming languages continued to be developed. Examples of third generation computers are the IBM-360 series, the ICL-900 series and the Honeywell 200 series.
Features of the third generation computers are:
  • The technology used: IC (Integrated Circuit).
  • Transistors were replaced by IC in their electronic circuitry.
  • High-level languages like FORTRAN, BASIC and others were used to develop programs.
  • Semiconductor memory like RAM and ROM were used as primary memory.
  • Monitor and keyboard were introduced for data input and output respectively.
  • Multiprogramming facility was developed.
  • The computer was used in census calculation, military, banks and industries.
  • Size, cost, power requirement and heat generation decreased.
  • Processing speed and storage capacity of computers increased.
4.) Fourth Generation Computer (1972 onward)
The invention of the microprocessor chip marked the beginning of the fourth generation of computers. Semiconductor memories replaced magnetic core memories. The invention of the microprocessor led to the development of the microcomputer, or personal computer.
The first microprocessor, called the Intel 4004, was developed by the American Intel Corporation in 1971.
In this generation, fourth generation languages and application software for microcomputers became popular and allowed home and business users to adapt their computers for word processing, spreadsheet manipulation, file handling and graphics.
In this generation, the concept of computer networks and CD-ROMs came into existence.
Features of the fourth generation computer are:
  • Technology used: VLSI (Very Large Scale Integration) and microprocessor-based technology.
  • Problem-oriented fourth generation language (4GL) is used to develop the program.
  • Semiconductor memories like RAM, ROM and cache memory are used as primary memory.
  • Magnetic disks like the hard disk, optical disks (CD, DVD), Blu-ray disks and flash memory (memory chips, pen drives) are used as secondary memory.
  • E-mail, Internet and mobile communication are developed.
  • Advanced, user-friendly web page software is developed.
  • Size, cost, power requirement, heat generation decreased compared to the previous generation.
  • Operating speed, storage capacity and use of computers increased compared to the previous generation.
Examples of fourth generation computers are the IBM PC, HP laptops, Mac notebooks, etc.
5.) Fifth Generation Computers
Fifth generation computers are in the developmental stage and are based on artificial intelligence. The goal of the fifth generation is to develop devices that can respond to natural language input and are capable of learning and self-organization. Quantum computation, molecular technology and nanotechnology will be used in this generation. So we can say that fifth generation computers will have the power of human intelligence.
CHARACTERISTICS
1) The fifth generation computers will use super large scale integrated chips.
2) They will have artificial intelligence.
3) They will be able to recognize images and graphs.
4) Fifth generation computers aim to solve highly complex problems, including decision making and logical reasoning.
5) They will be able to use more than one CPU for faster processing speed.
6) Fifth generation computers are intended to work with natural language.


Monday, September 9, 2019

Electronic Computer Era


# Explain the Electronic computer era in detail.
  • The Electronic Computer Era
The computers of this age were developed using electronic components like vacuum tubes, transistors, ICs, VLSI, etc. These computers are smaller, faster and more reliable.
  • The ENIAC (1943-1946)
In 1946, John W. Mauchly and J. Presper Eckert constructed the ENIAC (Electronic Numerical Integrator and Calculator) at the Moore School of Engineering of the University of Pennsylvania, USA. ENIAC was the first popular general purpose, all-electronic digital computer. John Von Neumann was the consultant of the ENIAC project.
It was a very large machine weighing about 30 tons and containing about 17,468 vacuum tubes, 70,000 resistors, 5 million soldered joints and it consumed 160 kilowatts.
  • The EDVAC (1946-1952)
EDVAC (Electronic Discrete Variable Automatic Computer) was developed by Dr. John Von Neumann together with members of the Moore School of Engineering of the University of Pennsylvania, J. P. Eckert and J. W. Mauchly. The EDVAC was built for the Moore School personnel and the Ballistics Research Laboratory of the US Army, and it was based on John Von Neumann's idea of the stored program.

  • The EDSAC
EDSAC, in full the Electronic Delay Storage Automatic Calculator, was the first full-size stored-program computer. It was built at the University of Cambridge by Maurice Wilkes and others to provide a formal computing service for users. EDSAC was built according to the principles of the Hungarian-American scientist John von Neumann. Wilkes built the machine chiefly to study computer programming issues, which he realized would become as important as the hardware details.
  • The UNIVAC (1951)
UNIVAC (Universal Automatic Computer) was developed by J. P. Eckert and J. Mauchly in 1951. It was the first general purpose digital computer manufactured for commercial use, and it was designed to handle both numeric and textual information. Before this, all computers had been used either for defense or for census purposes; the first business use of UNIVAC was by the General Electric Corporation in 1954.



Tuesday, September 3, 2019

Electro-Mechanical Era


2.) Explain The Electro-Mechanical Era in chronological order.

The calculators of this age were developed using mechanical components together with the electronic component, the vacuum tube.
Successful general purpose mechanical computers were built in the 1930s. Konrad Zuse developed a mechanical computer, the Z1, in 1938 in Germany.
  • The Mark I Computer (1937 - 1944)
Howard H. Aiken, a professor of physics, designed a general purpose mechanical computer at Harvard University with IBM, the IBM Automatic Sequence Controlled Calculator (IBM ASCC). It was the first fully automatic calculating machine and was later known as the Harvard Mark I.
It used binary numbers for its operation. Later, the Mark II was invented by Aiken and his colleagues; it used electromechanical relays for its operation. The Mark II used 19,000 valves.
  • The Mark II Computer
It used about 18 thousand vacuum tubes as the main memory device and had 7 lakh 50 thousand (750,000) parts. It was 51 feet long, 8 feet high and 3 feet wide, and thus bulky in size. It was capable of performing five basic operations: addition, subtraction, multiplication, division and table reference. Results were printed at the rate of one result per five seconds.
  • The Atanasoff-Berry Computer (1939 - 1942)
In 1939, John Vincent Atanasoff and Clifford Berry designed the Atanasoff-Berry Computer, or ABC, for solving systems of simultaneous mathematical equations. It used 45 valves for internal logic and capacitors for storage.

It used punched cards for input and output operations, i.e., as secondary storage. It is considered the first computing machine to introduce the ideas of binary arithmetic, regenerative memory and logic circuits.
  • The Colossus (1941 - 1944)
In 1944, the Colossus computer was designed by Alan M. Turing and built by the British mathematician Max Newman and some colleagues at the University of Manchester, England; it comprised 1,800 vacuum tubes.
It was one of the world's earliest working programmable electronic digital computers. Colossus was a special purpose machine suited to a narrow range of tasks (for example, it was not capable of performing decimal multiplication).

Sunday, September 1, 2019

Mechanical Era of computer History

1. Describe The Mechanical Era (zeroth generation) of computer history in chronological order.
Ancient people lived on the earth for centuries without counting. Then they started to count on their ten fingers, but it was still difficult to keep track of and remember things. Finger counting was gradually replaced by the use of stones, counting notches on sticks or marks on walls. The different devices described below helped humans keep records with the passing of time.

1.) The Mechanical Era (Zeroth generation)

The calculators of this age were developed using mechanical components like wood, metal, stone, bone, etc. They were used for simple mathematical calculations. Some of the popular calculating devices used in this age are:
  • Abacus
In the ancient period, it was used to perform mathematical calculations. It was used for simple operations like counting, addition, subtraction and multiplication of numbers. An abacus consists of a rectangular frame carrying a number of wooden rods. A mid-bar divides each of these rods into two unequal parts: the upper and the lower.

The upper part is called heaven, whereas the lower part is called earth. The heaven part has two beads on each rod, whereas the earth part has five. The value of a bead in the heaven part is five and in the earth part is one. Each abacus consists of nine, eleven or thirteen rods.
  • Napier Bone [John Napier (1550-1617 AD)]
The Scottish mathematician John Napier first published a table of logarithms in 1614 AD. It was widely used and simplified a large number of calculations.
He invented bone rods and used them to perform division by subtraction and multiplication by addition, according to his principle. They were made of strips of bone on which numbers were carved (imprinted) and painted, which is why the device is also called Napier's Bones.
  • Slide Rule [William Oughtred (1575-1660AD)]
The slide rule is a rectangular calculating device based on the principle of logarithms. It consists of two graduated scales, one of which slides along the other. The scales are divided in such a way that a suitable alignment of one scale against the other makes it possible to find the products and quotients of numbers.
  • Pascaline [Blaise Pascal (1623-1662AD)]
The French mathematician and philosopher Blaise Pascal invented the first arithmetic calculator, which could do addition and subtraction; numbers were entered by manipulating its dials, and the operations were carried out using gears. Pascal invented the machine for his father, a tax collector, so it was also the first business machine.
  • Stepped Reckoner [Baron Gottfried Wilhelm Von Leibniz (1646-1716AD)]
This German mathematician and philosopher invented a calculator, the Stepped Reckoner, which was able to perform addition, subtraction, multiplication, division and even square roots automatically. Instead of wheels it used stepped drums, each with nine teeth of varying lengths, and so it was called the 'Leibniz Calculator' or 'Stepped Reckoner'.
  • Jacquard Loom [Joseph Marie Jacquard (1752-1834 AD)]
The Frenchman Joseph Marie Jacquard was a textile manufacturer who invented a mechanism for the automated weaving of cloth for the textile industry at Lyon, in 1802 AD. This machine was used to automatically control weaving looms to facilitate the production of cloth with complex patterns.
  • Charles Babbage (1791-1871 AD)
The English professor and mathematician Charles Babbage invented the Difference Engine at Cambridge University in 1822 AD. This machine could solve differential equations and calculate various mathematical functions.
He also designed a machine resembling today's modern computer, called the "Analytical Engine", based on the principle of input, process, output and a mill, but he was unable to complete it due to a lack of money and of electronic devices.
Because of inventions like the Analytical Engine and the Difference Engine, which anticipated today's modern computer, he is known as the father of modern computer science.
  • Lady Augusta Ada Byron Lovelace (1815-1852 AD)
Lady Augusta Ada Byron Lovelace, an intelligent and independent-minded English woman, was a daughter of the English poet Lord Byron and a great follower and assistant of Charles Babbage. She documented Babbage's work and wrote programs for his Analytical Engine.
Her plan is now regarded as the first computer program. That is why she is considered the first computer programmer, and a software language developed by the US Defense Department was named 'ADA' in her honor.
  • Tabulating Machine [Herman Hollerith (1860-1929 AD)]
The American inventor Herman Hollerith applied the Jacquard loom concept to computing and applied for patents for an automatic punch-card tabulating machine, which became known as the "Tabulating Machine". This device could process punched cards and perform census calculations faster than ever before.
  • John Von Neumann (1903-1957 AD)
In 1945, the Hungarian mathematician John Von Neumann gave the idea of the stored-program computer, in which the program is stored internally in the main memory of the computer along with its associated data. For this he is called the "Father of the Stored Program". Before that, the programs required by a computer were built in and wired permanently into the hardware, so modification of a program was not possible. After Neumann, programs were stored on the computer in storage media, so modification became easy and flexible.