Difference Between Sequential And Concurrent



What is the difference between concurrent programming and parallel programming? I asked Google but didn't find anything that helped me understand the difference. Could you give me an example for both?

For now I have found this explanation: http://www.linux-mag.com/id/7411 - but 'concurrency is a property of the program' vs. 'parallel execution is a property of the machine' isn't enough for me; I still can't say which is which.

matekm

14 Answers

If your program is using threads (concurrent programming), it's not necessarily going to be executed as such (parallel execution), since it depends on whether the machine can handle several threads.

Here's a visual example. Threads on a non-threaded machine:
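(something like this; the exact interleaving is only illustrative)

    thread A:  --    --    --    --
    thread B:     --    --    --
    core:      -- -- -- -- -- -- --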

Threads on a threaded machine:
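(again illustrative)

    thread A:  --------------   (core 1)
    thread B:  --------------   (core 2)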

The dashes represent executed code. As you can see, they both split up and execute separately, but the threaded machine can execute several separate pieces at once.

Tor Valamo

Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU. The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs. Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang is perhaps the most promising upcoming language for highly concurrent programming.
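A minimal sketch of that pattern in Go (the URLs are placeholders and error handling is reduced to reporting the error): it initiates all the requests concurrently, then accepts the responses in whatever order they happen to arrive.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        urls := []string{ // placeholder pages, not real targets
            "https://example.com/a",
            "https://example.com/b",
            "https://example.com/c",
        }

        type result struct {
            url, status string
        }
        results := make(chan result)

        // Initiate every request concurrently.
        for _, u := range urls {
            go func(u string) {
                resp, err := http.Get(u)
                if err != nil {
                    results <- result{u, err.Error()}
                    return
                }
                defer resp.Body.Close()
                results <- result{u, resp.Status}
            }(u)
        }

        // Accept the responses as they become available; the order is
        // non-deterministic and can differ on every run.
        visited := make(map[string]string)
        for range urls {
            r := <-results
            visited[r.url] = r.status
            fmt.Println(r.url, r.status)
        }
    }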

Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. Dense matrix-matrix multiply is a pedagogical example of parallel programming and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising language for high-performance parallel programming on shared-memory computers (including multicores).
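A rough fork-join sketch in Go (purely illustrative, and much simpler than Strassen's algorithm): the parent spawns child tasks that each sum one chunk of a slice, and it only combines the partial results once every subtask has finished, so the control flow stays deterministic.

    package main

    import (
        "fmt"
        "sync"
    )

    // parallelSum splits xs into `workers` chunks, sums each chunk in its own
    // goroutine, and combines the partial sums only after all subtasks are done.
    func parallelSum(xs []int, workers int) int {
        partial := make([]int, workers)
        var wg sync.WaitGroup

        chunk := (len(xs) + workers - 1) / workers
        for w := 0; w < workers; w++ {
            lo, hi := w*chunk, (w+1)*chunk
            if lo > len(xs) {
                lo = len(xs)
            }
            if hi > len(xs) {
                hi = len(xs)
            }
            wg.Add(1)
            go func(w, lo, hi int) {
                defer wg.Done()
                for _, x := range xs[lo:hi] {
                    partial[w] += x // each subtask writes only its own slot
                }
            }(w, lo, hi)
        }

        wg.Wait() // the parent continues only once every subtask has finished
        total := 0
        for _, p := range partial {
            total += p
        }
        return total
    }

    func main() {
        xs := make([]int, 1000)
        for i := range xs {
            xs[i] = i + 1
        }
        fmt.Println(parallelSum(xs, 4)) // 500500
    }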

Jon Harrop

Concurrent = Two queues and one coffee machine.

Parallel = Two queues and two coffee machines.

GKislin

Interpreting the original question as being about parallel/concurrent computation rather than programming:

In concurrent computation two computations both advance independently of each other. The second computation doesn't have to wait until the first is finished before it can advance. It doesn't say, however, by what mechanism this is achieved. In a single-core setup, it requires suspending and alternating between threads (also called pre-emptive multithreading).

In parallel computation two computations both advance simultaneously - that is, literally at the same time. This is not possible with a single-core CPU; it requires a multi-core setup instead.


According to: 'Parallel vs Concurrent in Node.js'.

pspi


I believe concurrent programming refers to multithreaded programming, which is about letting your program run multiple threads, abstracted from hardware details.

Parallel programming refers to specifically designing your program's algorithms to take advantage of available parallel execution. For example, you can execute two branches of an algorithm in parallel in the expectation that this will hit the result sooner (on average) than if you first checked the first branch and then the second.

user151323

I found this content in a blog and thought it was useful and relevant.

Concurrency and parallelism are NOT the same thing. Two tasks T1 and T2 are concurrent if the order in which the two tasks are executed in time is not predetermined:

  • T1 may be executed and finished before T2,
  • T2 may be executed and finished before T1,
  • T1 and T2 may be executed simultaneously at the same instant of time (parallelism),
  • T1 and T2 may be executed alternately,
  • ...

If two concurrent threads are scheduled by the OS to run on one single-core, non-SMT, non-CMP processor, you may get concurrency but not parallelism. Parallelism is possible on multi-core, multi-processor or distributed systems.

Concurrency is often referred to as a property of a program, and is a concept more general than parallelism.

Source: https://blogs.oracle.com/yuanlin/entry/concurrency_vs_parallelism_concurrent_programming

loknath


They're two phrases that describe the same thing from (very slightly) different viewpoints. Parallel programming is describing the situation from the viewpoint of the hardware -- there are at least two processors (possibly within a single physical package) working on a problem in parallel. Concurrent programming is describing things more from the viewpoint of the software -- two or more actions may happen at exactly the same time (concurrently).

The problem here is that people are trying to use the two phrases to draw a clear distinction when none really exists. The reality is that the dividing line they're trying to draw has been fuzzy and indistinct for decades, and has grown ever more indistinct over time.

What they're trying to discuss is the fact that once upon a time, most computers had only a single CPU. When you executed multiple processes (or threads) on that single CPU, the CPU was only really executing one instruction from one of those threads at a time. The appearance of concurrency was an illusion: the CPU switched between executing instructions from different threads quickly enough that, to human perception (to which anything less than 100 ms or so looks instantaneous), it looked like it was doing many things at once.

The obvious contrast to this is a computer with multiple CPUs, or a CPU with multiple cores, so the machine is executing instructions from multiple threads and/or processes at exactly the same time; code executing in one can't/doesn't have any effect on code executing in the other.

Now the problem: such a clean distinction has almost never existed. Computer designers are actually fairly intelligent, so they noticed a long time ago that (for example) when you needed to read some data from an I/O device such as a disk, it took a long time (in terms of CPU cycles) to finish. Instead of leaving the CPU idle while that happened, they figured out various ways of letting one process/thread make an I/O request, and let code from some other process/thread execute on the CPU while the I/O request completed.

So, long before multi-core CPUs became the norm, we had operations from multiple threads happening in parallel.

That's only the tip of the iceberg though. Decades ago, computers started providing another level of parallelism as well. Again, being fairly intelligent people, computer designers noticed that in a lot of cases, they had instructions that didn't affect each other, so it was possible to execute more than one instruction from the same stream at the same time. One early example that became pretty well known was the Control Data 6600. This was (by a fairly wide margin) the fastest computer on earth when it was introduced in 1964--and much of the same basic architecture remains in use today. It tracked the resources used by each instruction, and had a set of execution units that executed instructions as soon as the resources on which they depended became available, very similar to the design of most recent Intel/AMD processors.

But (as the commercials used to say) wait--that's not all. There's yet another design element to add still further confusion. It's been given quite a few different names (e.g., 'Hyperthreading', 'SMT', 'CMP'), but they all refer to the same basic idea: a CPU that can execute multiple threads simultaneously, using a combination of some resources that are independent for each thread, and some resources that are shared between the threads. In a typical case this is combined with the instruction-level parallelism outlined above. To do that, we have two (or more) sets of architectural registers. Then we have a set of execution units that can execute instructions as soon as the necessary resources become available. These often combine well because the instructions from the separate streams virtually never depend on the same resources.

Then, of course, we get to modern systems with multiple cores. Here things are obvious, right? We have N (somewhere between 2 and 256 or so, at the moment) separate cores that can all execute instructions at the same time, so we have a clear-cut case of real parallelism--executing instructions in one process/thread doesn't affect executing instructions in another.

Well, sort of. Even here we have some independent resources (registers, execution units, at least one level of cache) and some shared resources (typically at least the lowest level of cache, and definitely the memory controllers and bandwidth to memory).

To summarize: the simple scenarios people like to contrast between shared resources and independent resources virtually never happen in real life. With all resources shared, we end up with something like MS-DOS, where we can only run one program at a time, and we have to stop running one before we can run the other at all. With completely independent resources, we have N computers running MS-DOS (without even a network to connect them) with no ability to share anything between them at all (because if we can even share a file, well, that's a shared resource, a violation of the basic premise of nothing being shared).

Every interesting case involves some combination of independent resources and shared resources. Every reasonably modern computer (and a lot that aren't at all modern) has at least some ability to carry out at least a few independent operations simultaneously, and just about anything more sophisticated than MS-DOS has taken advantage of that to at least some degree.

The nice, clean division between 'concurrent' and 'parallel' that people like to draw just doesn't exist, and almost never has. What people like to classify as 'concurrent' usually still involves at least one and often more different types of parallel execution. What they like to classify as 'parallel' often involves sharing resources and (for example) one process blocking another's execution while using a resource that's shared between the two.

People trying to draw a clean distinction between 'parallel' and 'concurrent' are living in a fantasy of computers that never actually existed.

Jerry Coffin
  • Concurrent programming is used in a general sense to refer to environments in which the tasks we define can occur in any order. One task can occur before or after another, and some or all tasks can be performed at the same time.

  • Parallel programming refers specifically to the simultaneous execution of concurrent tasks on different processors. Thus, all parallel programming is concurrent, but not all concurrent programming is parallel.

Source: PThreads Programming - A POSIX Standard for Better Multiprocessing, Buttlar, Farrell, Nichols

snr

In programming, concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of (possibly related) computations.
- Andrew Gerrand -

And

Concurrency is the composition of independently executing computations. Concurrency is a way to structure software, particularly as a way to write clean code that interacts well with the real world. It is not parallelism.

Concurrency is not parallelism, although it enables parallelism. If you have only one processor, your program can still be concurrent but it cannot be parallel. On the other hand, a well-written concurrent program might run efficiently in parallel on a multiprocessor. That property could be important.
- Rob Pike -

To understand the difference, I strongly recommend watching this video by Rob Pike (one of the creators of Go): Concurrency Is Not Parallelism

Jinbom Heo

Parallel programming happens when code is being executed at the same time and each execution is independent of the others. Therefore, there is usually no preoccupation with shared variables and the like, because that is unlikely to happen.

Concurrent programming, however, consists of code being executed by different processes/threads that share variables and other state, so we must establish some sort of rule to decide which process/thread executes first. We want this so that we can be sure there will be consistency and that we can know with certainty what will happen. If there is no control and all threads compute at the same time and store things in the same variables, how would we know what to expect in the end? Maybe one thread is faster than another; maybe one of the threads even stops in the middle of its execution and another continues a different computation with a corrupted (not yet fully computed) variable; the possibilities are endless. It is in these situations that we usually use concurrent programming instead of parallel.
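A minimal sketch in Go of the kind of rule this is talking about (the names are illustrative): two goroutines update the same counter, and a mutex decides who goes first, so the final value is predictable.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu      sync.Mutex
            counter int
            wg      sync.WaitGroup
        )

        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock() // without this lock the two threads race
                    counter++
                    mu.Unlock()
                }
            }()
        }

        wg.Wait()
        fmt.Println(counter) // always 2000 with the mutex in place
    }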

sharp_c-tudent

Classic scheduling of tasks can be serial, parallel or concurrent.

  • Serial: tasks must be executed one after the other in a known, strict order or it will not work. Easy enough.

  • Parallel: tasks must be executed at the same time or it will not work.

    • Any failure of any of the tasks - functionally or in time - will result in total system failure.
    • All tasks must have a common reliable sense of time.

    Try to avoid this or we will have tears by tea time.

  • Concurrent: we do not care. We are not careless, though: we have analysed it and it doesn't matter; we can therefore execute any task using any available facility at any time. Happy days.

Often, the available scheduling changes at known events, which we call state changes.

People often think this is about software, but it is in fact a systems design concept that pre-dates computers; software systems were a little slow in the uptake, and very few programming languages even attempt to address the problem. You might try looking up the transputer language occam if you are interested.

Succinctly, systems design addresses the following:

  • the verb - what you are doing (operation or algorithm)
  • the noun - what you are doing it to (data or interface)
  • when - initiation, schedule, state changes
  • how - serial, parallel, concurrent
  • where - once you know when things happen, you can say where they can happen and not before.
  • why - is this the way to do it? Are there other ways, and more importantly, a better way? What happens if you don't do it?

Good luck.

Don

I understood the difference to be:

1) Concurrent - running in tandem using shared resources
2) Parallel - running side by side using different resources

So you can have two things happening at the same time independently of each other, even if they come together at points (2), or two things drawing on the same resources throughout the operations being executed (1).

Jonathan

Although there isn't complete agreement on the distinction between the terms parallel and concurrent, many authors make the following distinctions:

  • In concurrent computing, a program is one in which multiple tasks can be in progress at any instant.
  • In parallel computing, a program is one in which multiple tasks cooperate closely to solve a problem.

So parallel programs are concurrent, but a program such as a multitasking operating system is also concurrent, even when it is run on a machine with only one core, since multiple tasks can be in progress at any instant.

Source: An introduction to parallel programming, Peter Pacheco

zbs


I got familiar with a little bit of Verilog at school and now, one year later, I bought a Basys 3 FPGA board. My goal is to learn VHDL.

I have been reading a free book called 'Free Range VHDL' which assists greatly in understanding the VHDL language. I have also searched through github repos containing VHDL code for reference.

My biggest concern is the difference between sequential and concurrent execution. I understand the meaning of these two words, but I still cannot imagine why we can use a 'process' for combinational logic (e.g. a seven-segment decoder). I have implemented my seven-segment decoder as a conditional assignment of concurrent statements. What would be the difference if I implemented the decoder using a process and a case statement? I do not understand what 'sequential execution' of a process means when it comes to combinational logic. I would understand it if it were a sequential machine, i.e. a state machine.

Can somebody please explain this concept?

Here is my code for a seven-segment decoder:

Thank you,

Jake Hladik

jakeh12

1 Answer

My biggest concern is the difference between sequential and concurrent execution. I understand the meaning of these two words but I still cannot imagine why we can use a 'process' for combinational logic (e.g. a seven-segment decoder).

You are confounding two things:

  • The type of logic, which can be sequential or combinational.
  • The order of execution of statements, which can be sequential or concurrent.

Types of logic

In logic design:

  • A combinational circuit is one that implements a pure logic function without any state. There is no need for a clock in a combinational circuit.
  • A sequential circuit is one that changes every clock cycle and that remembers its state (using flip-flops) between clock cycles.

The following VHDL process is combinational:
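A minimal sketch of such a process (the signal names a, b and z are illustrative):

    process (a, b)
    begin
        z <= a and b;
    end process;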

We know it is combinational because:


  • It does not have a clock.
  • All its inputs are in its sensitivity list (the parentheses after the process keyword). That means a change to any one of these inputs will cause the process to be re-evaluated.

The following VHDL process is sequential:
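Again a minimal sketch with illustrative signal names (note that z appears on both sides of the assignment):

    process (clk)
    begin
        if rising_edge(clk) then
            z <= z xor a;
        end if;
    end process;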

We know it is sequential because:

  • It is only sensitive to changes on its clock (clk).
  • Its output only changes value on a rising edge of the clock.
  • The output value of z depends on its previous value (z is on both sides of the assignment).

Model of Execution

To make a long story short, processes are executed as follows in VHDL:

  • Statements within a process are executed sequentially (i.e. one after the other in order).
  • Processes run concurrently relative to one another.

Processes in Disguise

So-called concurrent statements, essentially all statements outside a process, are actually processes in disguise. For example, this concurrent signal assignment (i.e. an assignment to a signal outside a process):
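For illustration, with the same placeholder signal names as before:

    z <= a and b;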

is equivalent to this process:
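(again with the illustrative signals)

    process (a, b)
    begin
        z <= a and b;
    end process;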

That is, it is equivalent to the same assignment within a process that has all of its inputs in the sensitivity list. And by equivalent, I mean the VHDL standard (IEEE 1076) actually defines the behaviour of concurrent signal assignments by their equivalent process.

What that means is that, even though you didn't know it, the conditional signal assignment in your hex_display_decoder is already a process.

Which, in turn, means

What would be the difference if I implemented the decoder using a process and a case statement?

None at all.

Philippe Aubertin
