Search Results

Search found 905 results on 37 pages for 'signals slots'.

Page 8 of 37

  • shmat() sending signal 11

    - by Sachin Chourasiya
    Please let me know when a call to the shmat() function can raise signal 11 (SIGSEGV).

        if ((shmId = shmget(shmKey, 0, 0)) <= 0) {
            printf("shmget(0x%x)", shmKey);
            return 0;
        }
        printf("queueOpen - Key 0x%x gives ID %d", shmKey, shmId);
        qh = shmat(shmId, 0, SHM_RND);

    I am getting signal 11 from the code above. Please help.
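
    shmat() itself rarely raises SIGSEGV; the crash usually comes from using its return value without checking it, because shmat() reports failure by returning (void *) -1 rather than NULL, and dereferencing that value faults. A minimal sketch of a checked attach, assuming shmKey names a segment created elsewhere (attach_segment is a hypothetical helper, not code from the question):

        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>
        #include <cstdio>

        char *attach_segment(key_t shmKey)
        {
            // shmget() returns -1 on error, so compare against -1 rather than <= 0.
            int shmId = shmget(shmKey, 0, 0);
            if (shmId == -1) {
                perror("shmget");
                return nullptr;
            }

            // Let the kernel choose the attach address by passing a null pointer.
            void *addr = shmat(shmId, nullptr, 0);
            if (addr == reinterpret_cast<void *>(-1)) {   // shmat() signals failure with (void *) -1
                perror("shmat");
                return nullptr;                           // dereferencing (void *) -1 is a classic SIGSEGV
            }
            return static_cast<char *>(addr);
        }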

    Read the article

  • Do all the network cards use the same frequency to send signals to wire?

    - by smwikipedia
    I am comparing my cable TV wire to my network wire. In a TV cable, different frequencies are used by different TV channels. And since a given channel uses a fixed frequency, I think the only remaining way to represent different signals is with the carrier wave's amplitude. But what about the network wire? For network cards of the same type, do they also use different frequencies to send signals, just like TV cable? I vaguely remember that they use frequency adjustment to represent signals, so the frequency should not be a fixed one. So how do all the network cards sharing the same medium differentiate their own signals from the others'?

    Read the article

  • Is it possible to connect slots of a model object to the GUI in Qt 4 Designer?

    - by Robert0
    So I am trying to build a Model-View window using Qt Designer and C++. For that reason I created a QObject-derived class as my model. It provides slots and signals to access it, like setFileName(QString) or fileNameChanged(QString). I got a little into using signal/slot drag and drop in Qt Designer and found it quite nice, VA-Smalltalk-like. After a while I was wondering if I could also connect my model to this. So is it possible to somehow introduce my model object into the window/GUI and let Qt Designer connect signals and slots from the model object to the GUI? In essence, write for me:

        connect(model, SIGNAL(fileNameChanged(QString)), ui->labelFn, SLOT(setText(QString)));
        connect(ui->textEdit2, SIGNAL(textChanged(QString)), model, SLOT(setFileName(QString)));

    Thanks for explaining.
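
    Qt Designer's signal/slot editor only offers objects that actually live on the form, so a model created in code is normally wired up by hand right after setupUi(). A minimal sketch under that assumption (MyModel, MainWindow and the widget names are placeholders based on the question; note that QTextEdit::textChanged() carries no argument, so a QLineEdit, whose textChanged(QString) matches setFileName(QString), is used here):

        // mainwindow.cpp -- sketch of connecting a code-created model after setupUi()
        #include "mainwindow.h"
        #include "ui_mainwindow.h"

        MainWindow::MainWindow(QWidget *parent)
            : QMainWindow(parent), ui(new Ui::MainWindow), model(new MyModel(this))
        {
            ui->setupUi(this);

            // model -> view: keep the label in sync with the model's file name
            connect(model, SIGNAL(fileNameChanged(QString)),
                    ui->labelFn, SLOT(setText(QString)));

            // view -> model: push edits from the line edit back into the model
            connect(ui->lineEditFn, SIGNAL(textChanged(QString)),
                    model, SLOT(setFileName(QString)));
        }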

    Read the article

  • Sun Grid Engine : jobs are not well balanced

    - by GlinesMome
    I use Open Grid Scheduler (a fork/copy of Sun Grid Engine). I have tried this configuration from the master:

        # qconf -mattr exechost complex_values slots=8 slave2
        # qconf -mq all.q | grep slots
        slots                 100,[slave1=1],[slave2=8]

    slave1 is down. Then I submit 10 qsub jobs with a sleep example (so no CPU consumption), but only 4 jobs run at the same time on slave2 instead of the 8 slots I configured. What did I miss? PS: my goal is to provide effectively infinite slots so as to force SGE to schedule only via consumable resources.

    Read the article

  • How to design an application for scaling?

    - by Muhammad
    I have one application which handles events from hardware connected to the same computer's PCIe slots. The maximum number of PCIe slots on the motherboard is two, and I have utilized both. Now, to scale the application, I need either more PCIe slots in the same computer or another computer. So consider that I am using another computer with the same application and hardware connected to its PCIe slots. My problem is that I want to design an application on top of this which can access both computers' hardware devices and do the processing on them. The processed data should be sent back to the respective PC's hardware. Please refer to the attached diagram for the expansion.

    Read the article

  • Why does my computer work from a live CD without the hard disk, but not respond to anything when the hard disk is connected?

    - by Yosef
    History of the problem: I formatted the computer (an HP Pavilion desktop). When I restart the computer, it gets to the first screen before boot and does not respond to any keys (F2, F10, Esc, etc.). I took out the motherboard battery, put it back after a while, and powered the computer on; the result was the same as before. I then disconnected the hard disk cables, inserted an Ubuntu live CD, and powered on the computer; result: it works without the hard disk. What is the root of the problem: a broken hard disk? Bad hard disk cables? The BIOS? Something else? How can I fix the problem (buy a new hard disk, etc.)? Thanks, Yosef

    Read the article

  • Laptop's Wi-Fi hotspot is not emitting a signal, so my PC and Nokia cell cannot connect to the internet

    - by umer sanny
    I have a USB internet connection which I plug into my laptop. I turned my laptop's Wi-Fi into a Wi-Fi hotspot so that I can attach my PC and cell phone to this network and use the internet, but it doesn't work. I have a Dell Inspiron N5110 i7 Windows 7 64-bit laptop, a Dell D865 PC and a Nokia C5. My PC and cell phone show that there is a new Wi-Fi connection available, but when I connect to it, it won't load any web page. The reason seems to be that my laptop's Wi-Fi is not passing any internet traffic, or maybe it is some other issue I don't know about. The laptop's adapter is 802.11b/g/n. How can I use Wi-Fi internet on the laptop, PC and cell phone at the same time?

    Read the article

  • Is there any loose-coupling mechanism in Objective-C + Cocoa like C# delegates or C++/Qt signals and slots?

    - by Eye of Hell
    For large programs, the standard way to tackle complexity is to divide the program code into small objects. Most current programming languages offer this functionality via classes, and so does Objective-C. But after the source code is separated into small objects, the second challenge is to somehow connect them with each other. Standard approaches, supported by most languages, are composition (one object is a member field of another), inheritance, templates (generics) and callbacks. More cryptic techniques include method-level delegates (C#) and signals+slots (C++/Qt). I like the delegates/signals idea, since while connecting two objects I can connect individual methods with each other, without the objects knowing anything of each other. For C#, it looks like this:

        var object1 = new CObject1();
        var object2 = new CObject2();
        object1.SomethingHappened += object2.HandleSomething;

    In this code, if object1 calls its SomethingHappened delegate (like a normal method call), the HandleSomething method of object2 will be called. For C++/Qt, it looks like this:

        auto object1 = new CObject1();
        auto object2 = new CObject2();
        connect(object1, SIGNAL(SomethingHappened()), object2, SLOT(HandleSomething()));

    The result will be exactly the same. This technique has some advantages and disadvantages, but generally I like it more than interfaces, since as the code base grows I can change connections and add new ones without creating tons of interfaces. After examining Objective-C I haven't found any way to use the technique I like :(. It seems that Objective-C supports message passing perfectly well, but it requires object1 to have a pointer to object2 in order to pass it a message. If some object needs to be connected to lots of other objects, in Objective-C I will be forced to give it a pointer to each of the objects it must be connected to. So, the question :). Is there any approach in Objective-C programming that closely resembles delegate / signal+slot style connections, rather than "give the first object a pointer to the second object so it can pass a message to it"? Method-level connections are a bit more preferable to me than object-level connections ^_^.

    Read the article

  • When does a PHP <5.3.0 daemon script receive signals?

    - by MidnightLightning
    I've got a PHP script in the works that is a job worker; its main task is to check a database table for new jobs, and if there are any, to act on them. But jobs will be coming in in bursts, with long gaps in between, so I devised a sleep cycle like:

        while (true) {
            if ($jobs = get_new_jobs()) {
                // Act upon the jobs
            } else {
                // No new jobs now
                sleep(30);
            }
        }

    Good, but in some cases that means there might be a 30 second lag before a new job is acted upon. Since this is a daemon script, I figured I'd try the pcntl_signal hook to catch a SIGUSR1 signal to nudge the script to wake up, like:

        $_isAwake = true;

        function user_sig($signo) {
            global $_isAwake;
            daemon_log("Caught SIGUSR1");
            $_isAwake = true;
        }
        pcntl_signal(SIGUSR1, 'user_sig');

        while (true) {
            if ($jobs = get_new_jobs()) {
                // Act upon the jobs
            } else {
                // No new jobs now
                daemon_log("No new jobs, sleeping...");
                $_isAwake = false;
                $ts = time();
                while (time() < $ts + 30) {
                    sleep(1);
                    if ($_isAwake) break; // Did a signal happen while we were sleeping? If so, stop sleeping
                }
                $_isAwake = true;
            }
        }

    I broke the sleep(30) up into smaller sleep bits, in case a signal doesn't interrupt a sleep() command, thinking that this would cause at most a one-second delay, but in the log file I'm seeing that the SIGUSR1 isn't being caught until after the full 30 seconds has passed (and maybe the outer while loop resets). I found the pcntl_signal_dispatch command, but that's only for PHP 5.3 and higher. If I were using that version, I could stick a call to that command before the if ($_isAwake) check, but as it currently stands I'm on 5.2.13. In what sort of situations is the signal queue processed in PHP versions that lack a way to explicitly trigger queue parsing? Could I put some otherwise-useless statement in that sleep loop that would trigger a signal queue parse in there?

    Read the article

  • How can I stream audio signals from various devices/computers to my home server?

    - by Breakthrough
    I currently have a headless home server set up (running Ubuntu 12.04 server edition) running a simple Apache HTTP server. The server is near an audio receiver, which controls a set of indoor and outdoor speakers in my home. Recently, my father purchased a Bluetooth adapter, which our various laptops and cellphones can connect to, outputting the music to the speakers. I was hoping to find a solution that worked over Wi-Fi, namely because it won't cost anything (I already have a server with an audio card), and it doesn't depend on Bluetooth. Is there any cross-platform (preferably free and open-source) solution that I can use which will allow me to stream audio to my home server, over my home network, from a wide variety of devices (laptops running Windows/Linux or cellphones running Android/BB/iOS)? I need something that works at least with Windows and Android. Also, just to clarify, I want something that simply allows devices to connect to my server and output an audio signal without any action on the server end (since it's a server hidden away near my receiver). Any subsequent connection attempt should be dropped, so only one device can be in control of the stereo at once.

    Read the article

  • dbus signal for volume up & down

    - by jldupont
    I recently upgraded to Ubuntu 10.10 Maverick Meerkat and, to my dismay, the media keys "volume up" and "volume down" do not send D-Bus signals anymore... how can I add these back? Thanks!! Update: it seems that under some circumstances (which I don't know exactly yet), the D-Bus signals start working again. It is as though when a certain application (TBD) is executed, the D-Bus signals are re-activated.

    Read the article

  • lots of backbone views - performance issues?

    - by ksol
    tl;dr: I wonder whether having lots of Backbone views (100+ for the moment, potentially up to 1000/2000 or more, one per cell of a table) is too heavy or not. The project I'm working on revolves around a planning view. There is one row per user that covers 6 hours of a day, each hour split into 4 slots of 15 minutes. This planning view is used to add "reservations" when clicking on a slot, should handle hovering of the correct slots, and should also handle when it is NOT possible to make a reservation, i.e. prevent the user clicking on an "unavailable" slot. There are many reasons why a slot can't be clicked on: the user is not available at this time, or the user is in a reservation, or the app needs to "force" a delay slot between two reservations. Reservations (a div) are rendered in a slot (a cell of a table) and, by toying with dimensions, cover the right number of slots. All this screen is handled with Backbone. So for each slot I'm hovering over, I need to check whether I can make a reservation there or not. At the moment I do this by toying with the data attributes on the slots: when a reservation object is added, the slots covered are "enhanced" with (among other things) the reservation object (the Backbone view object). But in some cases, which I don't quite have a grasp on yet, it gets mixed up, and when the reservation view is removed the slots are not "cleaned up": the previous class is not reset correctly. It is probably something I've done wrong or badly, but this is only going to get heavier; I think I should use another class of Backbone views here, but I'm afraid the number of slots, and therefore of view objects, will be high and cause performance issues. I don't know much about JS performance, so I'd like to have some feedback before jumping on that train. Any other advice on how to do this would be quite welcome too. Thanks for your time. If this is not clear enough, tell me and I'll try to rephrase it.

    Read the article

  • Quad channel memory and compatibility

    - by balteo
    My motherboard supports quad-channel memory. There are 8 memory slots in all: 4 slots are black and the other 4 are white. I currently have 4 memory modules of 1 GB each in the 4 white slots. That leaves me with 4 free memory slots. My question is: can I put 4 memory modules of 2 GB each in the 4 remaining slots, or do I have to use 1 GB modules throughout? FYI, here is the output of lshw:

        alpha
          description: Ordinateur Tour
          produit: Precision WorkStation 690
        *-cpu:0
          description: CPU
          produit: Intel(R) Xeon(R) CPU X5355 @ 2.66GHz
        *-memory
          description: Mémoire Système
          identifiant matériel: 1000
          emplacement: Carte mère
          taille: 4GiB
          *-bank:0
            description: FB-DIMM DDR2 FB-DIMM Synchrone 667 MHz (1,5 ns)
            produit: HYMP512F72CP8N3-Y5
            fabriquant: Hynix Semiconductor (Hyundai Electronics)
            identifiant matériel: 0
            numéro de série: 56737501
            emplacement: DIMM 1
            taille: 1GiB
            bits: 64 bits
            horloge: 667MHz (1.5ns)
          *-bank:1
            description: FB-DIMM DDR2 FB-DIMM Synchrone 667 MHz (1,5 ns)
            produit: HYMP512F72CP8N3-Y5
            fabriquant: Hynix Semiconductor (Hyundai Electronics)
            identifiant matériel: 1
            numéro de série: 48115124
            emplacement: DIMM 2
            taille: 1GiB
            bits: 64 bits
            horloge: 667MHz (1.5ns)
          *-bank:2
            description: FB-DIMM DDR2 FB-DIMM Synchrone 667 MHz (1,5 ns)
            produit: HYMP512F72CP8N3-Y5
            fabriquant: Hynix Semiconductor (Hyundai Electronics)
            identifiant matériel: 2
            numéro de série: 48115523
            emplacement: DIMM 3
            taille: 1GiB
            bits: 64 bits
            horloge: 667MHz (1.5ns)
          *-bank:3
            description: FB-DIMM DDR2 FB-DIMM Synchrone 667 MHz (1,5 ns)
            produit: HYMP512F72CP8N3-Y5
            fabriquant: Hynix Semiconductor (Hyundai Electronics)
            identifiant matériel: 3
            numéro de série: 48115424
            emplacement: DIMM 4
            taille: 1GiB
            bits: 64 bits
            horloge: 667MHz (1.5ns)
          *-bank:4
            description: FB-DIMM DDR2 FB-DIMM Synchrone 667 MHz (1,5 ns) [vide]
            fabriquant: FFFFFFFFFFFF
            identifiant matériel: 4
            numéro de série: FFFFFFFF
            emplacement: DIMM 5
            bits: 64 bits
            horloge: 667MHz (1.5ns)
          *-bank:5
            description: FB-DIMM DDR2 FB-DIMM Synchrone 667 MHz (1,5 ns) [vide]
            fabriquant: FFFFFFFFFFFF
            identifiant matériel: 5
            numéro de série: FFFFFFFF
            emplacement: DIMM 6
            bits: 64 bits
            horloge: 667MHz (1.5ns)
          *-bank:6
            description: FB-DIMM DDR2 FB-DIMM Synchrone 667 MHz (1,5 ns) [vide]
            fabriquant: FFFFFFFFFFFF
            identifiant matériel: 6
            numéro de série: FFFFFFFF
            emplacement: DIMM 7
            bits: 64 bits
            horloge: 667MHz (1.5ns)
          *-bank:7
            description: FB-DIMM DDR2 FB-DIMM Synchrone 667 MHz (1,5 ns) [vide]
            fabriquant: FFFFFFFFFFFF
            identifiant matériel: 7
            numéro de série: FFFFFFFF
            emplacement: DIMM 8
            bits: 64 bits
            horloge: 667MHz (1.5ns)
        *-pci:0
          description: Host bridge
          produit: 5000X Chipset Memory Controller Hub
          fabriquant: Intel Corporation
          identifiant matériel: 100
          information bus: pci@0000:00:00.0
          version: 12
          bits: 32 bits
          horloge: 33MHz

    Read the article

  • Android: How to receive process signals in an activity to kill a child process?

    - by user355993
    My application calls Runtime.exec() to launch an executable in a separate process at start-up time. I would like this child process to get killed when the parent activity exits. Now, I can use onDestroy() to handle regular cases, but not "Force quit", shutdowns from DDMS, or a kill from the console, since those don't run onDestroy(). A shutdown hook registered with Runtime.addShutdownHook() does not seem to be invoked in these cases either. Is there any other hook or signal handler that informs my activity that it's about to get terminated? As an alternative, is there a way to have the system automatically kill the child process when the parent dies?

    Read the article

  • How to add event listeners / signals to a simple superman class?

    - by Kabumbus
    I can and would love to use Boost or std for this. Sorry - I am new to C++. So I created a really simple program like:

        #include <iostream>
        #include <string>
        using namespace std;

        class superman {
        public:
            // Print the punch; returning cout as a string does not compile, so this is void.
            void punch() { cout << "superman: I hit the bad guy!" << endl; }
        };

        int main() {
            superman clark;
            clark.punch();
            cin.get();
        }

    I want to add an event listener that would tell me when clark punched and cout something like "superman punched!". How do I add such an event listener and event function to my class?
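
    One lightweight way to get this from the standard library alone (a sketch, not the asker's code) is to give the class a list of std::function callbacks and invoke them from punch():

        #include <functional>
        #include <iostream>
        #include <vector>

        class superman {
        public:
            // Listeners are plain callables; anything convertible to std::function<void()> works.
            void addPunchListener(std::function<void()> listener) {
                punchListeners_.push_back(std::move(listener));
            }

            void punch() {
                std::cout << "superman: I hit the bad guy!" << std::endl;
                for (const auto &listener : punchListeners_)   // notify everyone who subscribed
                    listener();
            }

        private:
            std::vector<std::function<void()>> punchListeners_;
        };

        int main() {
            superman clark;
            clark.addPunchListener([] { std::cout << "superman punched!" << std::endl; });
            clark.punch();
        }

    Boost.Signals2 (boost::signals2::signal<void()>) offers the same pattern with connection management built in, and Qt's signals/slots are the moral equivalent if the project already uses Qt.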

    Read the article

  • Computer Networks UNISA - Chap 8 – Wireless Networking

    - by MarkPearl
    After reading this section you should be able to:

      • Explain how nodes exchange wireless signals
      • Identify potential obstacles to successful transmission and their repercussions, such as interference and reflection
      • Understand WLAN architecture
      • Specify the characteristics of popular WLAN transmission methods, including 802.11 a/b/g/n
      • Install and configure wireless access points and their clients
      • Describe wireless MAN and WAN technologies, including 802.16 and satellite communications

    The Wireless Spectrum

    All wireless signals are carried through the air by electromagnetic waves. The wireless spectrum is a continuum of the electromagnetic waves used for data and voice communication. The wireless spectrum falls between 9 kHz and 300 GHz.

    Characteristics of Wireless Transmission

    Antennas: Each type of wireless service requires an antenna specifically designed for that service. The service's specification determines the antenna's power output, frequency, and radiation pattern. A directional antenna issues wireless signals along a single direction. An omnidirectional antenna issues and receives wireless signals with equal strength and clarity in all directions. The geographical area that an antenna or wireless system can reach is known as its range.

    Signal Propagation: LOS (line of sight) uses the least amount of energy and results in the reception of the clearest possible signal. When there is an obstacle in the way, the signal may pass through the object, be absorbed by the object, or be subject to reflection, diffraction or scattering.

      • Reflection – waves encounter an object and bounce off it.
      • Diffraction – the signal splits into secondary waves when it encounters an obstruction.
      • Scattering – the diffusion, or reflection in multiple different directions, of a signal.

    Signal Degradation: Fading occurs as a signal hits various objects. Because of fading, the strength of the signal that reaches the receiver is lower than the transmitted signal strength. The further a signal moves from its source, the weaker it gets (this is called attenuation). Signals are also affected by noise (electromagnetic interference). Interference can distort and weaken a wireless signal in the same way that noise distorts and weakens a wired signal.

    Frequency Ranges: Older wireless devices used the 2.4 GHz band to send and receive signals. This had 11 unlicensed communication channels. Newer wireless devices can also use the 5 GHz band, which has 24 unlicensed channels.

    Narrowband, Broadband, and Spread Spectrum Signals

      • Narrowband – a transmitter concentrates the signal energy at a single frequency or in a very small range of frequencies.
      • Broadband – uses a relatively wide band of the wireless spectrum and offers higher throughputs than narrowband technologies.

    The use of multiple frequencies to transmit a signal is known as spread-spectrum technology. In other words, a signal never stays continuously within one frequency range during its transmission. One specific implementation of spread spectrum is FHSS (frequency-hopping spread spectrum); another is DSSS (direct-sequence spread spectrum).

    Fixed vs. Mobile

    Each type of wireless communication falls into one of two categories:

      • Fixed – the locations of the transmitter and receiver do not move (this saves energy because weaker signal strength is possible with directional antennas).
      • Mobile – the location can change.

    WLAN (Wireless LAN) Architecture

    There are two main types of arrangements:

      • Ad hoc – data is sent directly between devices – good for small local devices.
      • Infrastructure mode – a wireless access point is placed centrally, and all devices connect to it.

    802.11 WLANs

    The most popular wireless standards used on contemporary LANs are those developed by IEEE's 802.11 committee. Over the years several distinct standards related to wireless networking have been released. Four of the best-known standards, also referred to as Wi-Fi, are 802.11b, 802.11a, 802.11g and 802.11n. These four standards share many characteristics, i.e. all four use half-duplex signalling and follow the same access method.

    Access Method: The 802.11 standards specify the use of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to access a shared medium. Using CSMA/CA, before a station begins to send data on an 802.11 network, it checks for existing wireless transmissions. If the source node detects no transmission activity on the network, it waits a brief period of time and then sends its transmission. If the source does detect activity, it waits a brief period of time before checking again. The destination node receives the transmission and, after verifying its accuracy, issues an acknowledgement (ACK) packet to the source. If the source receives the ACK it assumes the transmission was successful; if it does not receive an ACK it assumes the transmission failed and sends it again. (A small code sketch of this listen/back-off loop follows below.)

    Association: There are two types of scanning:

      • Active – the station transmits a special frame, known as a probe, on all available channels within its frequency range. When an access point finds the probe frame, it issues a probe response.
      • Passive – the wireless station listens on all channels within its frequency range for a special signal, known as a beacon frame, issued from an access point; the beacon frame contains the information necessary to connect to the access point.

    Re-association occurs when a mobile user moves out of one access point's range and into the range of another.

    Frames: Read pages 378–381 about frames and specific 802.11 protocols.

    Bluetooth Networks

    Ericsson originally invented the Bluetooth technology in the early 1990s. In 1998 other manufacturers joined Ericsson in the Special Interest Group (SIG), whose aim was to refine and standardize the technology. Bluetooth was designed to be used on small networks composed of personal communications devices. It has become a popular wireless technology for communicating among cellular telephones, phone headsets, etc.

    Wireless WANs and Internet Access

    Refer to pages 396–402 of the textbook for details.
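
    The access-method description above boils down to a small listen/back-off/retry loop. A toy C++ sketch of that idea, for illustration only (medium_busy, send_frame and ack_received are hypothetical stand-ins for a radio driver, stubbed out here so the sketch compiles, and the timing constants are arbitrary):

        #include <chrono>
        #include <cstddef>
        #include <random>
        #include <thread>

        // Hypothetical radio primitives, stubbed so the example is self-contained.
        bool medium_busy() { return false; }                                   // pretend the air is idle
        void send_frame(const void *, std::size_t) {}                          // pretend to transmit
        bool ack_received(std::chrono::milliseconds) { return true; }          // pretend every frame is ACKed

        // CSMA/CA-style transmit: listen before talking, random backoff, retry until ACKed.
        bool csma_ca_send(const void *frame, std::size_t len, int max_retries = 7) {
            std::mt19937 rng{std::random_device{}()};
            for (int attempt = 0; attempt <= max_retries; ++attempt) {
                while (medium_busy())                                          // carrier sense
                    std::this_thread::sleep_for(std::chrono::microseconds(50));
                // Contention window doubles with each failed attempt (binary exponential backoff).
                std::uniform_int_distribution<int> slot(0, (1 << (4 + attempt)) - 1);
                std::this_thread::sleep_for(std::chrono::microseconds(9 * slot(rng)));
                send_frame(frame, len);
                if (ack_received(std::chrono::milliseconds(10)))               // destination verified and ACKed
                    return true;
            }
            return false;                                                      // report failure after max_retries
        }

        int main() {
            char payload[] = "hello";
            return csma_ca_send(payload, sizeof payload) ? 0 : 1;
        }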

    Read the article

  • When should ThreadLocal be used instead of Thread.SetData/Thread.GetData?

    - by Jon Ediger
    Prior to .NET 4.0, I implemented a solution using named data slots in System.Threading.Thread. Now, in .NET 4.0, there is the idea of ThreadLocal<T>. How does ThreadLocal usage compare to named data slots? Does the ThreadLocal value get inherited by child threads? Is the idea that ThreadLocal is a simplified version of using named data slots? An example of some code using named data slots follows. Could this be simplified through use of ThreadLocal, and would it retain the same properties as the named data slots?

        public static void SetSliceName(string slice)
        {
            System.Threading.Thread.SetData(System.Threading.Thread.GetNamedDataSlot(SliceVariable), slice);
        }

        public static string GetSliceName(bool errorIfNotFound)
        {
            var slice = System.Threading.Thread.GetData(System.Threading.Thread.GetNamedDataSlot(SliceVariable)) as string;
            if (errorIfNotFound && string.IsNullOrEmpty(slice)) { throw new ConfigurationErrorsException("Server slice name not configured."); }
            return slice;
        }

    Read the article

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time. I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up. Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots. Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial. Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead. I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
    Title: Fast concurrent MPMC queue -- I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPMC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates, where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java.

    Partial operators are often more convenient than total methods. In many use cases, if the preconditions aren't met there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is the case of work-stealing queues, where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term. Perhaps waiting/non-waiting or patient/impatient might be better terms.) It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait.

    A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length, allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility.) Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass. Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods.

    We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile" and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally, T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue. tryPut() has an analogous construction.
        // PTLQueue :
        Put(v) :
          // producer : partial method - waits as necessary
          assert v != null
          assert Mask >= 1 && (Mask & (Mask+1)) == 0    // Document invariants
          // doorway step
          // Obtain a sequence number -- ticket
          // As a practical concern the ticket value is temporally unique
          // The ticket also identifies and selects a slot
          auto tkt = AtomicFetchIncrement (&PutCursor, 1)
          slot * s = &Slots[tkt & Mask]
          // waiting phase :
          // wait for slot's generation to match the tkt value assigned to this put() invocation.
          // The "generation" is implicitly encoded as the upper bits in the cursor
          // above those used to specify the index : tkt div (Mask+1)
          // The generation serves as an epoch number to identify a cohort of threads
          // accessing disjoint slots
          while s->Turn != tkt : Pause
          assert s->MailBox == null
          s->MailBox = v                 // deposit and pass message

        Take() :
          // consumer : partial method - waits as necessary
          auto tkt = AtomicFetchIncrement (&TakeCursor, 1)
          slot * s = &Slots[tkt & Mask]
          // 2-stage waiting :
          // First wait for turn for our generation
          // Acquire exclusive "take" access to slot's MailBox field
          // Then wait for the slot to become occupied
          while s->Turn != tkt : Pause
          // Concurrency in this section of code is now reduced to just 1 producer thread
          // vs 1 consumer thread.
          // For a given queue and slot, there will be at most one Take() operation running
          // in this section.
          // Consumer waits for producer to arrive and make slot non-empty
          // Extract message; clear mailbox; advance Turn indicator
          // We have an obvious happens-before relation :
          // Put(m) happens-before corresponding Take() that returns that same "m"
          for
            T v = s->MailBox
            if v != null :
              s->MailBox = null
              ST-ST barrier
              s->Turn = tkt + Mask + 1   // unlock slot to admit next producer and consumer
              return v
            Pause

        tryTake() :
          // total method - returns ASAP with failure indication
          for
            auto tkt = TakeCursor
            slot * s = &Slots[tkt & Mask]
            if s->Turn != tkt : return null
            T v = s->MailBox             // presumptive return value
            if v == null : return null
            // ratify tkt and v values and commit by advancing cursor
            if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue
            s->MailBox = null
            ST-ST barrier
            s->Turn = tkt + Mask + 1
            return v

    The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could conceptualize PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields, while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment.
    PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field.

    To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor, obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on a slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry: put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and the operations to increment those cursors serve double duty: slot selection and ticket assignment for locking the slot's MailBox field.

    At any given time a slot MailBox field can be in one of the following states:

      • empty with no pending operations -- neutral state;
      • empty with one or more waiting take() operations pending -- deficit;
      • occupied with no pending operations;
      • occupied with one or more waiting put() operations -- surplus;
      • empty with a pending put(), or pending put() and take() operations -- transitional; or
      • occupied with a pending take(), or pending put() and take() operations -- transitional.

    The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access the TakeCursor, and a take() operation modifies the TakeCursor but does not access the PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause) but it's trivial to implement per-slot parking and unparking to deschedule waiting threads.
    It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move in either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper.

    There's quite a bit of related literature in this area. I'll call out a few relevant references. Regarding provenance and priority, I think PTLQueue, or queues effectively equivalent to PTLQueue, have been independently rediscovered a number of times; see CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others.

      • Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is classic and the canonical starting point: Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add.
      • Gottlieb et al.: Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors.
      • Orozco et al.: CB-Queue, in Toward high-throughput algorithms on many-core architectures, which appeared in TACO 2012.
      • Meneghin et al.: the BNPVB family, in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture.
      • Dmitry Vyukov: bounded MPMC queue (highly recommended).
      • Alex Otenko: US8607249 (highly related).
      • John Mellor-Crummey: Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester.
      • Thomasson: FIFO Distributed Bakery Algorithm (very similar to PTLQueue).
      • Scott and Scherer: Dual Data Structures.

    I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory, otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore, let's say it was usually the case that the PutCursor and TakeCursor were numerically close to each other (that's true in some use cases). We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable.)

    As an aside, say we need to busy-wait for some condition as follows: "while C == 0 : Pause". Let's say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into: "if C == 0 : for { Pause; if C != 0 : break; }".
    Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back to "while C == 0 : Pause". You can sometimes avoid this by inserting a call to some type of very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use: "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}". It's worth noting the obvious duality between locks and queues: if you have a strict FIFO lock implementation with local spinning and succession by direct handoff, such as MCS or CLH, then you can usually transform that lock into a queue.
Acquire "put" access to slot via PTL-like lock Acquire "take" access to slot via PTL-like lock 2 locks : put and take -- at most one thread can access slot's mailbox Both locks use same "turn" field Like multilane : 2 cursors : put and take slot is simple 1-capacity mailbox instead of queue Borrow per-slot turn/grant from PTL Provides strict FIFO Lock slot : put-vs-put take-vs-take at most one put accesses slot at any one time at most one put accesses take at any one time reduction to 1-vs-1 instead of N-vs-M concurrency Per slot locks for put/take Release put/take by advancing turn * is instrumental in ... * P-V Semaphore vs lock vs K-exclusion * See also : FastQueues-excerpt.java dice-etc/queue-mpmc-bounded-blocking-circular-xadd/ * PTLQueue is the same as PTLQB - identical * Expedient return; ASAP; prompt; immediately * Lamport's Bakery algorithm : doorway step then waiting phase Threads arriving at doorway obtain a unique ticket number Threads enter in ticket order * In the terminology of Reed and Kanodia a ticket lock corresponds to the busy-wait implementation of a semaphore using an eventcount and a sequencer It can also be thought of as an optimization of Lamport's bakery lock was designed for fault-tolerance rather than performance Instead of spinning on the release counter, processors using a bakery lock repeatedly examine the tickets of their peers --

    Read the article

  • With Google DFP (Small Business) is it possible to disable AdSense in an Ad Slot on a per-request basis?

    - by Daniel Pehrson
    Setup: I run a network of websites that target different hobby niches and have a section dedicated to community classifieds. I serve advertising on these sites through Google DFP for Small Business with AdSense enabled on the slots. Problem: One of the next sites in my network will be targeting the firearms/shooting industry, and as such the classifieds section will not comply with the prohibited content guidelines of AdSense regarding the sale (or coordination of the sale) of weapons. I work very hard to comply with the guidelines of my partners even if I don't understand/agree with them, and after talking with many people I have decided that the best option is to disable AdSense serving on that section of that website, while leaving it on for the rest of the network. Solution: Right now my only idea for this is to duplicate all my ad slots and tack a "_sensitive" onto the end of each one (e.g. header and header_sensitive), conditionally registering ad slots based on whether or not I am in the sensitive section of the sensitive site. My hope, however, is that there may be a way to accomplish this without duplicating all my ad slots, possibly with some sort of option to the GA_googleFillSlot() call that allows me to say "load ads from this slot but do not serve AdSense no matter what."

    Read the article
