Search Results

Search found 5026 results on 202 pages for 'blocked threads'.

Page 3 of 202

  • two possible wifi devices competing, one is hard blocked

    - by patrickmw
    I blacklisted acer_wmi because it was showing up in the rfkill list; then ideapad_wlan was listed:

        $ rfkill list wifi
        1: ideapad_wlan: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        3: brcmwl-0: Wireless LAN
            Soft blocked: no
            Hard blocked: yes

        $ lshw -C network
        *-network
            description: Ethernet interface
            product: AR8131 Gigabit Ethernet
            vendor: Atheros Communications
            physical id: 0
            bus info: pci@0000:03:00.0
            logical name: eth0
            version: c0
            serial: f0:de:f1:12:21:e9
            size: 1Gbit/s
            capacity: 1Gbit/s
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.1.139 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
            resources: irq:42 memory:f0400000-f043ffff ioport:2000(size=128)
        *-network
            description: Wireless interface
            product: BCM4313 802.11b/g/n Wireless LAN Controller
            vendor: Broadcom Corporation
            physical id: 0
            bus info: pci@0000:04:00.0
            logical name: eth1
            version: 01
            serial: ac:81:12:38:ba:89
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
            configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11
            resources: irq:17 memory:f0500000-f0503fff

    I'm not sure how to disable the wifi devices independently, and I'm also not sure which device is the correct one; I think it's the brcmwl-0 device. Any suggestions?

    Read the article

  • How to signal a buffer full state between posix threads

    - by mikip
    Hi, I have two threads. The main thread, A, is responsible for message handling between a number of processes. When thread A gets a buffer-full message, it should inform thread B and pass it a pointer to the buffer, which thread B will then process. When thread B has finished, it should inform thread A that it is done. How do I go about implementing this with POSIX threads in C on Linux? I have looked at condition variables; is this the way to go? I'm not experienced in multithreaded programming and would like some advice on the best avenue to take. Thanks
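
    A minimal sketch of the condition-variable handshake being asked about, assuming a single shared buffer pointer and a done flag; the names and structure are illustrative, not the poster's actual code:

        /* pthread condition-variable handshake between thread A and thread B */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
        static char *full_buffer = NULL;   /* set by A, consumed by B */
        static int   work_done   = 0;      /* set by B when processing finishes */

        static void *thread_b(void *arg)
        {
            (void)arg;
            pthread_mutex_lock(&lock);
            while (full_buffer == NULL)            /* wait for A's signal */
                pthread_cond_wait(&cond, &lock);
            printf("B: processing \"%s\"\n", full_buffer);
            full_buffer = NULL;
            work_done = 1;                         /* tell A we're finished */
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        int main(void)
        {
            pthread_t b;
            pthread_create(&b, NULL, thread_b, NULL);

            /* Thread A: hand the buffer to B, then wait for B to finish. */
            pthread_mutex_lock(&lock);
            full_buffer = "buffer contents";
            pthread_cond_signal(&cond);
            while (!work_done)
                pthread_cond_wait(&cond, &lock);
            pthread_mutex_unlock(&lock);

            pthread_join(b, NULL);
            return 0;
        }

    The predicate loops around pthread_cond_wait are what make the handshake safe even if one side signals before the other has started waiting.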

    Read the article

  • Java: Using multiple Threads to paint simultaneously on a JPanel

    - by user347217
    I have a JPanel on which I wish to have several threads painting "animations". An "animation" consists of a JLabel with an ImageIcon on it, which is being moved from one area of the screen to another. Now the problem is that I want several such animations to be portrayed on screen at once by those threads. The trouble is that the JPanel's paint() method can only be triggered by one thread at a time, causing the animations to execute serially instead of in parallel. Any idea how to have several such animations on screen at the same time?
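
    Swing painting is confined to the event dispatch thread, so rather than several painting threads, a common approach is to keep per-animation state and repaint everything from a single javax.swing.Timer. A rough sketch of that approach (class and field names are made up, and a filled oval stands in for the ImageIcon):

        import javax.swing.*;
        import java.awt.*;
        import java.util.ArrayList;
        import java.util.List;

        class Sprite {
            int x, y, dx, dy;
            Sprite(int x, int y, int dx, int dy) { this.x = x; this.y = y; this.dx = dx; this.dy = dy; }
            void step() { x += dx; y += dy; }
        }

        class AnimationPanel extends JPanel {
            private final List<Sprite> sprites = new ArrayList<>();

            AnimationPanel() {
                sprites.add(new Sprite(0, 10, 2, 0));
                sprites.add(new Sprite(0, 60, 3, 1));
                // One timer on the EDT advances every animation and repaints once per tick.
                new Timer(16, e -> { sprites.forEach(Sprite::step); repaint(); }).start();
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                for (Sprite s : sprites) {
                    g.fillOval(s.x, s.y, 20, 20);   // stand-in for drawing an ImageIcon
                }
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame f = new JFrame("Animations");
                    f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    f.add(new AnimationPanel());
                    f.setSize(400, 200);
                    f.setVisible(true);
                });
            }
        }

    Because all sprites are advanced and painted in the same pass, they appear to move at the same time without any cross-thread painting.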

    Read the article

  • Linked list example using threads

    - by Carl_1789
    I have read the following code, which uses a CRITICAL_SECTION when working with multiple threads to grow a linked list. What would the main() part look like that uses two threads to add to the linked list?

        #include <windows.h>
        #include <stdlib.h>   /* malloc */

        typedef struct _Node {
            struct _Node *next;
            int data;
        } Node;

        typedef struct _List {
            Node *head;
            CRITICAL_SECTION critical_sec;
        } List;

        List *CreateList()
        {
            List *pList = (List*)malloc(sizeof(List));  /* sizeof(List), not sizeof(pList) */
            pList->head = NULL;
            InitializeCriticalSection(&pList->critical_sec);
            return pList;
        }

        void AddHead(List *pList, Node *node)
        {
            EnterCriticalSection(&pList->critical_sec);
            node->next = pList->head;
            pList->head = node;
            LeaveCriticalSection(&pList->critical_sec);
        }

        void Insert(List *pList, Node *afterNode, Node *newNode)
        {
            EnterCriticalSection(&pList->critical_sec);
            if (afterNode == NULL) {
                AddHead(pList, newNode);   /* safe: critical sections are re-entrant */
            } else {
                newNode->next = afterNode->next;
                afterNode->next = newNode;
            }
            LeaveCriticalSection(&pList->critical_sec);
        }

        Node *Next(List *pList, Node *node)
        {
            Node *next;
            EnterCriticalSection(&pList->critical_sec);
            next = node->next;
            LeaveCriticalSection(&pList->critical_sec);
            return next;
        }
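
    One possible main(), offered as an illustration rather than a canonical answer; it reuses the List, Node, CreateList, AddHead and Next definitions above and simply starts two threads that each push a few nodes:

        #include <stdio.h>

        /* Thread routine: each thread adds five nodes to the shared list. */
        DWORD WINAPI AddNodes(LPVOID param)
        {
            List *pList = (List *)param;
            for (int i = 0; i < 5; i++) {
                Node *node = (Node *)malloc(sizeof(Node));
                node->data = i;
                AddHead(pList, node);          /* serialized by the critical section */
            }
            return 0;
        }

        int main(void)
        {
            List *pList = CreateList();
            HANDLE threads[2];

            threads[0] = CreateThread(NULL, 0, AddNodes, pList, 0, NULL);
            threads[1] = CreateThread(NULL, 0, AddNodes, pList, 0, NULL);
            WaitForMultipleObjects(2, threads, TRUE, INFINITE);

            /* Walk the list; ten nodes should be present in some interleaved order. */
            for (Node *n = pList->head; n != NULL; n = Next(pList, n))
                printf("%d\n", n->data);

            DeleteCriticalSection(&pList->critical_sec);
            return 0;
        }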

    Read the article

  • Need help with threads in a client/server

    - by nunos
    For college, I am developing a local relay chat. I have to program a chat server and client that only send messages between different terminal windows on the same computer, using threads and FIFOs. The FIFO part is giving me no trouble; the threads part is the one giving me headaches. The server has one thread for receiving commands from a FIFO (used by all clients) and another thread for each client that is connected. For each connected client I need to keep certain information. At first I used global variables, which worked as long as there was only one client connected, and that is not much of a chat: chatting alone. So, ideally, I would have some data per connected client, such as nickname, name, email, and so on. However, I don't know how to do that. I could create a client_data[MAX_NUMBER_OF_THREADS] array, where client_data is a struct with everything I need access to, but this would require every communication between server and client to look up the client's id in the client_data array, which does not seem very practical. I could also instantiate a client_data immediately after creating the thread, but it would only be available in that block, which is not very practical either. As you can see, I am in need of a little guidance here. Any comment, piece of code, or link to any relevant information is greatly appreciated. Thanks.
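
    A common pattern here is to heap-allocate one per-client struct and hand its pointer to pthread_create, so each client thread owns its own state without globals or index lookups. A minimal sketch with assumed field names (not the poster's code):

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct {
            char nickname[32];
            char name[64];
            char email[64];
            int  fifo_fd;                       /* per-client FIFO descriptor */
        } client_data;

        static void *client_thread(void *arg)
        {
            client_data *c = arg;               /* private to this thread */
            printf("serving %s <%s>\n", c->nickname, c->email);
            /* ... read/write this client's FIFO here ... */
            free(c);                            /* the thread owns and frees its state */
            return NULL;
        }

        int main(void)
        {
            pthread_t tid;
            client_data *c = calloc(1, sizeof *c);
            strcpy(c->nickname, "nunos");               /* example values */
            strcpy(c->email, "nunos@example.com");
            c->fifo_fd = -1;                            /* placeholder: open the real FIFO here */

            pthread_create(&tid, NULL, client_thread, c);
            pthread_join(tid, NULL);
            return 0;
        }

    The server would allocate one such struct per accepted client and never needs a shared array unless it also wants to enumerate clients (for example, to broadcast), in which case a mutex-protected list of these pointers works too.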

    Read the article

  • Why is CoRegisterClassObject creating two extra threads?

    - by Stijn Sanders
    I'm trying to fix a problem that only recently started happening on a number of machines on a VPN. They each run a client application I wrote that exposes a COM automation object. For some strange reason I haven't been able to discover yet, one thread in the application takes up all of the available CPU time, slowing other operations on the machine. In observing the application's strange behaviour, I've noticed it's the third thread started, and if I debug on my machine I notice the first call to CoRegisterClassObject creates two extra threads. If the second of these two threads is the one that gets into an infinite loop, I'm not at all sure how to fix this. Where could I check next for what's wrong? Could it have been started by one of the recent patches rolled out by Microsoft this last 'patch Tuesday'? I had a go with Process Explorer to extract a stack trace of the thread:

        ntoskrnl.exe!ExReleaseResourceLite+0x1a3
        ntoskrnl.exe!PsGetContextThread+0x329
        WLDAP32.dll!Ordinal325+0x1231
        WLDAP32.dll!Ordinal325+0x129e
        WLDAP32.dll!Ordinal325+0x1178
        ntdll.dll!LdrInitializeThunk+0x24
        ntdll.dll!LdrShutdownThread+0xe9
        kernel32.dll!ExitThread+0x3e
        kernel32.dll!FreeLibraryAndExitThread+0x1e
        ole32.dll!StringFromGUID2+0x65d
        kernel32.dll!GetModuleFileNameA+0x1ba

    Read the article

  • Concurrent Threads in C# using BackgroundWorker

    - by Jim Fell
    My C# application uses a background worker to wait for the acknowledgement of some transmitted data. Here is some pseudocode demonstrating what I'm trying to do:

        UI_thread {
            TransmitData() {
                // load data for tx
                // fire off TX background worker
            }
            RxSerialData() {
                // if received data is ack, set ack received flag
            }
        }

        TX_thread {
            // transmit data
            // set ack wait timeout
            // fire off ACK background worker
            // wait for ACK background worker to complete
            // evaluate status of ACK background worker as completed, failed, etc.
        }

        ACK_thread {
            // wait for ack received flag to be set
        }

    What happens is that the ACK BackgroundWorker times out and the acknowledgement is never received. I'm fairly certain the acknowledgement is being transmitted by the remote device, because that device has not changed at all and the C# application is transmitting. I have changed the ack thread from this (when it was working)...

        for( i = 0; (i < waitTimeoutVar) && (!bAckRxd); i++ )
        {
            System.Threading.Thread.Sleep(1);
        }

    ...to this...

        DateTime dtThen = DateTime.Now;
        DateTime dtNow;
        TimeSpan stTime;
        do
        {
            dtNow = DateTime.Now;
            stTime = dtNow - dtThen;
        } while ( (stTime.TotalMilliseconds < waitTimeoutVar) && (!bAckRxd) );

    The latter generates a very accurate wait time compared to the former. However, I am wondering if removing the Sleep call is interfering with the ability to receive serial data. Does C# only allow one thread to run at a time, that is, do I have to put threads to sleep at some point to allow other threads to run? Any thoughts or suggestions you may have would be appreciated. I am using Microsoft Visual C# 2008 Express Edition. Thanks.
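
    For reference, a busy loop like the second version pins a core. A hedged sketch of an alternative (assumed names, not the poster's code): have the serial receive handler set an event, and let the ACK worker block on that event with a timeout instead of spinning.

        using System;
        using System.Threading;

        class AckWaiter
        {
            private readonly ManualResetEvent ackReceived = new ManualResetEvent(false);

            // Called from the serial DataReceived handler once the ACK is parsed.
            public void OnAckReceived()
            {
                ackReceived.Set();
            }

            // Called from the ACK worker: blocks without spinning, up to waitTimeoutMs.
            public bool WaitForAck(int waitTimeoutMs)
            {
                return ackReceived.WaitOne(waitTimeoutMs, false);
            }
        }

        class Demo
        {
            static void Main()
            {
                AckWaiter waiter = new AckWaiter();

                // Simulate the receive side setting the event after 200 ms.
                new Thread(delegate() { Thread.Sleep(200); waiter.OnAckReceived(); }).Start();

                bool gotAck = waiter.WaitForAck(1000);
                Console.WriteLine(gotAck ? "ACK received" : "ACK timed out");
            }
        }

    As for the question itself: C# does run threads concurrently, but a tight loop that never yields can starve other work on a single-core machine and wastes a core elsewhere, which is why a blocking wait (or at least a short Sleep) is kinder to the receive path.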

    Read the article

  • Why does my ActivePerl program report 'Sorry. Ran out of threads'?

    - by Zaid
    Tom Christiansen's example code (à la perlthrtut) is a recursive, threaded implementation of finding and printing all prime numbers between 3 and 1000. Below is a mildly adapted version of the script:

        #!/usr/bin/perl
        # adapted from prime-pthread, courtesy of Tom Christiansen

        use strict;
        use warnings;
        use threads;
        use Thread::Queue;

        sub check_prime {
            my ($upstream, $cur_prime) = @_;
            my $child;
            my $downstream = Thread::Queue->new;

            while (my $num = $upstream->dequeue) {
                next unless ($num % $cur_prime);
                if ($child) {
                    $downstream->enqueue($num);
                } else {
                    $child = threads->create(\&check_prime, $downstream, $num);
                    if ($child) {
                        print "This is thread ", $child->tid, ". Found prime: $num\n";
                    } else {
                        warn "Sorry. Ran out of threads.\n";
                        last;
                    }
                }
            }

            if ($child) {
                $downstream->enqueue(undef);
                $child->join;
            }
        }

        my $stream = Thread::Queue->new(3 .. shift, undef);
        check_prime($stream, 2);

    When run on my machine (under ActiveState and Win32), the code was capable of spawning only 118 threads (last prime number found: 653) before terminating with a 'Sorry. Ran out of threads' warning. In trying to figure out why the number of threads I could create was limited, I replaced the use threads; line with use threads (stack_size => 1);. The resulting code happily dealt with churning out 2000+ threads. Can anyone explain this behavior?

    Read the article

  • [newbie] Webservice - Threads - close if no active threads; if any active, then wait till it completes

    - by Raj
    Hi, I have an ASP.NET web service running. When the system date changes, I want to close all the threads and restart them, provided the threads aren't doing any work. If any of the threads is in the middle of doing some work (active threads), then I need to wait till the current threads complete their work before closing. This will prevent aborting any in-progress processing. Any ideas on how I can perform such a task? Thanks. PS: I am new to ASP.NET and multithreading.
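
    One way to express "wait until nothing is in flight" is an active-work counter guarded by a lock, with Monitor.Wait/Pulse to wake the shutdown code. A rough sketch under assumed names, not tied to any particular ASP.NET hosting detail:

        using System;
        using System.Threading;

        class GracefulShutdown
        {
            private readonly object gate = new object();
            private int activeWork = 0;
            private bool stopping = false;

            // A worker calls this before starting a unit of work; false means "shutting down".
            public bool TryBeginWork()
            {
                lock (gate)
                {
                    if (stopping) return false;
                    activeWork++;
                    return true;
                }
            }

            // A worker calls this when its unit of work is done.
            public void EndWork()
            {
                lock (gate)
                {
                    activeWork--;
                    if (activeWork == 0) Monitor.PulseAll(gate);
                }
            }

            // Called when the system date changes: refuse new work, wait for in-flight work.
            public void StopAndWait()
            {
                lock (gate)
                {
                    stopping = true;
                    while (activeWork > 0) Monitor.Wait(gate);
                }
                // ...safe to stop and restart the worker threads here, then clear stopping...
            }
        }

    Each worker wraps its processing in TryBeginWork/EndWork, so StopAndWait returns immediately when the threads are idle and otherwise blocks until the last active one finishes.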

    Read the article

  • Perl Threads with binmode

    - by jAndy
    Hi folks, this call:

        my $th = threads->create(\&print, "Hello thread World!\n");
        $th->join();

    works fine. But as soon as I add

        binmode(STDOUT, ":encoding(ISO-8859-1)");

    to my script file, I get an error like "segmentation fault" or "access denied". What is wrong with setting an encoding layer when creating a Perl thread? Kind regards --Andy

    Read the article

  • How to learn about Threads, Especially in Java.

    - by Javed Ahamed
    I have always been kind of confused by threads, and my class right now makes heavy use of them. We are using java.util.concurrent, but I don't even really get the basics. CountDownLatch, Futures, Executors: these words just fly over my head. Can you guys suggest any resources to help learn what I need from the ground up? Thanks a lot in advance!
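
    As a tiny taste of the pieces mentioned above (this example is mine, not from the question): an ExecutorService runs tasks on pooled threads, a Future is a handle to a pending result, and a CountDownLatch lets one thread wait for others.

        import java.util.concurrent.*;

        public class ExecutorIntro {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(2);

                Future<Integer> answer = pool.submit(new Callable<Integer>() {
                    public Integer call() {
                        return 6 * 7;   // runs on a pool thread
                    }
                });

                final CountDownLatch done = new CountDownLatch(1);
                pool.execute(new Runnable() {
                    public void run() {
                        System.out.println("side task finished");
                        done.countDown();   // releases anyone waiting on the latch
                    }
                });

                done.await();                       // wait for the Runnable to finish
                System.out.println(answer.get());   // wait for, then print, 42
                pool.shutdown();
            }
        }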

    Read the article

  • Queuing methods to be run on an object by different threads in Python

    - by Ben
    Let's say I have an object whose class definition looks like:

        import time

        class Command:
            foo = 5

            def run(self, bar):
                time.sleep(1)
                self.foo = bar
                return self.foo

    If this class is instantiated once, but different threads are hitting its run method (via an HTTP request, handled separately) passing in different args, what is the best method to queue them? Can this be done in the class definition itself?
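
    A sketch of one way to do it inside the class itself, assuming Python 3 and only the standard library (names like _jobs and _drain are made up): push each call onto a queue.Queue and let a single worker thread execute them one at a time.

        import queue
        import threading
        import time


        class Command:
            foo = 5

            def __init__(self):
                self._jobs = queue.Queue()
                self._worker = threading.Thread(target=self._drain, daemon=True)
                self._worker.start()

            def _drain(self):
                while True:
                    bar, done = self._jobs.get()
                    time.sleep(1)
                    self.foo = bar
                    done.put(self.foo)          # hand the result back to the caller
                    self._jobs.task_done()

            def run(self, bar):
                done = queue.Queue(maxsize=1)
                self._jobs.put((bar, done))     # calls execute one at a time, in arrival order
                return done.get()


        if __name__ == "__main__":
            cmd = Command()
            # Two "request handler" threads hitting the same instance concurrently.
            results = []
            threads = [threading.Thread(target=lambda b=b: results.append(cmd.run(b))) for b in (1, 2)]
            for t in threads: t.start()
            for t in threads: t.join()
            print(results)

    If serializing the calls is all that is needed and ordering does not matter, a plain threading.Lock acquired inside run is an even simpler variant of the same idea.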

    Read the article

  • Using events in threads between processes - C

    - by Jamie Keeling
    Hello all! I have an application consisting of two windows; one communicates with the other and sends it a struct containing two integers (in this case two rolls of a dice). I will be using events for the following circumstances:

        - Process A sends data to process B; process B displays the data.
        - Process A closes, in turn closing process B.
        - Process B closes, in turn closing process A.

    I have noticed that if the second process is constantly waiting for the first process to send data, then the program just sits waiting, which is where the idea of implementing threads in each process came from, and I have already started to implement this. The problem is that I don't have a lot of experience with threads and events, so I'm not sure of the best way to actually implement what I want to do. Below is a small snippet of what I have so far in the producer application.

    Creating the thread:

        case IDM_FILE_ROLLDICE:
        {
            hDiceRoll = CreateThread(
                NULL,            // lpThreadAttributes (default)
                0,               // dwStackSize (default)
                ThreadFunc,      // lpStartAddress (was ThreadFunc(hMainWindow), which calls the function instead of passing it)
                &hMainWindow,    // lpParameter (handle passed through to the thread)
                0,               // dwCreationFlags
                &hDiceID);       // lpThreadId (returned by function)
        }
        break;

    The data being sent to the other process:

        DWORD WINAPI ThreadFunc(LPVOID passedHandle)
        {
            HANDLE hMainHandle = *((HANDLE*)passedHandle);
            WCHAR buffer[256];
            LPCTSTR pBuf;
            LPVOID lpMsgBuf;
            LPVOID lpDisplayBuf;
            struct diceData storage;
            HANDLE hMapFile;
            DWORD dw;

            // Roll dice and store results in variable
            storage = RollDice();

            hMapFile = CreateFileMapping(
                (HANDLE)0xFFFFFFFF,   // use paging file
                NULL,                 // default security
                PAGE_READWRITE,       // read/write access
                0,                    // maximum object size (high-order DWORD)
                BUF_SIZE,             // maximum object size (low-order DWORD)
                szName);              // name of mapping object
            if (hMapFile == NULL)
            {
                dw = GetLastError();
                MessageBox(hMainHandle, L"Could not create file mapping object", L"Error", MB_OK);
                return 1;
            }

            pBuf = (LPTSTR) MapViewOfFile(hMapFile,   // handle to map object
                FILE_MAP_ALL_ACCESS,                  // read/write permission
                0,
                0,
                BUF_SIZE);
            if (pBuf == NULL)
            {
                MessageBox(hMainHandle, L"Could not map view of file", L"Error", MB_OK);
                CloseHandle(hMapFile);
                return 1;
            }

            CopyMemory((PVOID)pBuf, &storage, sizeof(storage));  // copy the whole struct, not _tcslen(szMsg) bytes
            //_getch();
            MessageBox(hMainHandle, L"Completed!", L"Success", MB_OK);

            UnmapViewOfFile(pBuf);
            return 0;
        }

    I'm trying to find out how I would integrate an event with the threaded code to signal to the other process that something has happened. I've seen an MSDN article on using events, but it has only confused me if anything, and I'm coming up empty searching the internet too. Thanks for any help.

    Edit: I can only use the Create/Set/Open methods for events; sorry for not mentioning it earlier.
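
    Given the Create/Set/Open restriction mentioned in the edit, here is a hedged sketch of how a named event could tie the two processes together; the event name, function names, and the use of WaitForSingleObject to block on it are my assumptions, not taken from the question or the MSDN article:

        #include <windows.h>

        /* Producer side (the dice-rolling process): call after the struct has been
         * copied into the file mapping. */
        void SignalDiceRolled(void)
        {
            HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, L"Local\\DiceRolled"); /* auto-reset */
            if (hEvent != NULL)
            {
                SetEvent(hEvent);        /* wakes the consumer's waiting thread */
                CloseHandle(hEvent);
            }
        }

        /* Consumer side (the displaying process): run this in its worker thread. */
        DWORD WINAPI WaitForDiceThread(LPVOID unused)
        {
            HANDLE hEvent;
            (void)unused;
            hEvent = CreateEvent(NULL, FALSE, FALSE, L"Local\\DiceRolled");
            if (hEvent == NULL)
                return 1;

            while (WaitForSingleObject(hEvent, INFINITE) == WAIT_OBJECT_0)
            {
                /* OpenFileMapping/MapViewOfFile here and read the diceData struct */
            }
            CloseHandle(hEvent);
            return 0;
        }

    Calling CreateEvent on both sides (it simply opens the existing object when the name is already in use) avoids the startup race where OpenEvent fails because the other process has not created the event yet; OpenEvent works just as well once the object is known to exist.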

    Read the article

  • BSoD when creating threads

    - by Behrooz
    I am trying to create 5+ threads synchronously, so there shouldn't be any concurrency error. Code:

        System.Threading.Thread t = new System.Threading.Thread(proc);
        t.Start();   // == t.BlueScreen();
        t.Join();

    Is darkness a feature? Am I doing something wrong? OS: Microsoft Windows Vista (unfortunately) x64. Language: C# 3.0|4.0. .NET version: 3.5|4.

    Read the article

  • Understanding java's native threads and the jvm

    - by Moev4
    I understand that the JVM is itself an application that turns the bytecode of the Java executable into native machine code, but when using native threads I have some questions that I just cannot seem to answer. Does every thread create its own instance of the JVM to handle its particular execution? If not, then does the JVM have to have some way to schedule which thread it will handle next? If so, wouldn't this render the multi-threaded nature of Java useless, since only one thread can be run at a time?

    Read the article

  • C threads question

    - by Nadeem
    Write a program that has two threads: one reads numbers from the user and the other prints them, such that the first thread reads a new number only after the previous one has been printed by the second thread. Declare one global variable to use for reading and printing.
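
    One possible outline (my sketch, not a provided solution): two POSIX semaphores enforce the alternation, and the single required global variable carries the number between the threads.

        #include <pthread.h>
        #include <semaphore.h>
        #include <stdio.h>

        static int number;                 /* the single global variable */
        static sem_t can_read, can_print;

        static void *reader(void *arg)
        {
            (void)arg;
            for (;;) {
                sem_wait(&can_read);               /* wait until the last number was printed */
                if (scanf("%d", &number) != 1) break;
                sem_post(&can_print);
            }
            return NULL;                           /* printer stays blocked; process exit cleans it up */
        }

        static void *printer(void *arg)
        {
            (void)arg;
            for (;;) {
                sem_wait(&can_print);
                printf("read: %d\n", number);
                sem_post(&can_read);               /* allow the next read */
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t r, p;
            sem_init(&can_read, 0, 1);             /* reader goes first */
            sem_init(&can_print, 0, 0);
            pthread_create(&r, NULL, reader, NULL);
            pthread_create(&p, NULL, printer, NULL);
            pthread_join(r, NULL);
            return 0;
        }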

    Read the article

  • Generate a proper 404 page for blocked sites via /etc/hosts instead of redirect to localhost

    - by Mixhael
    I have blocked some websites by editing /etc/hosts and adding several newline-separated entries of the following form:

        0.0.0.0 www.domain.com

    And it works. The only thing is: when a blocked website is visited, the browser is redirected to my http://localhost, resulting in a directory listing or whatever website presentation is running in my localhost root environment. It's not a very big problem, but I would prefer a standard error saying the website cannot be visited (for instance, a 404 page). Is this possible?

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background

    One of the more common issues we've been seeing in the field is the growing difficulty of optimizing the performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors, which present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware.

    On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads, some designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior.

    We're very aware of these challenges in Solaris, and have been working to provide the best out-of-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers both to address issues caused by contention over shared hardware resources and to explicitly take advantage of features such as T4's dynamic threading.

    What it is

    The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it will likely be advantageous to give the producer more exclusive access to the hardware instead of having it compete for resources with all the consumers. In the case of a T4-based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher priority threads first: it will also provide them with more exclusive access to hardware resources if they are available.

    How does it work?

    Using the previous example in Solaris 11, all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime. If the system is fully committed, the scheduler will put all the available CPUs to work.

    Best practices

    If you're an application developer, we encourage you to look into assigning the right priorities to the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.

    Read the article

  • Using device variable by multiple threads on CUDA

    - by ashagi
    I am playing around with CUDA. At the moment I have a problem: I am testing a large array for particular responses, and when I get a response, I have to copy the data into another array. For example, my test array of 5 elements looks like this: [ ][ ][v1][ ][ ][v2]. The result must look like this: [v1][v2]. The problem is how to calculate the address in the second array at which to store the result. All elements of the first array are checked in parallel. I am thinking of declaring a device variable int addr = 0. Every time I find a response, I will increment addr. But I am not sure about that, because it means that addr may be accessed by multiple threads at the same time. Will that cause problems? Or will a thread wait until another thread finishes using that variable?
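
    Plain concurrent increments of a device variable do race (threads do not wait for each other), so here is a sketch of the usual fix, mine rather than the poster's code: reserve each output slot with atomicAdd, which performs the read-increment-write atomically and returns the previous value. The array contents, kernel name, and non-zero predicate are illustrative.

        #include <cstdio>

        __device__ int out_count = 0;          // next free index in the output array

        __global__ void compact(const int *in, int *out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n && in[i] != 0) {                 // "response" test: non-zero here
                int slot = atomicAdd(&out_count, 1);   // each matching thread gets a unique slot
                out[slot] = in[i];                     // note: output order is unspecified
            }
        }

        int main()
        {
            const int n = 6;
            int h_in[n] = {0, 0, 7, 0, 0, 9};
            int *d_in, *d_out;
            cudaMalloc((void**)&d_in, n * sizeof(int));
            cudaMalloc((void**)&d_out, n * sizeof(int));
            cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);

            compact<<<1, 32>>>(d_in, d_out, n);

            int count = 0;
            cudaMemcpyFromSymbol(&count, out_count, sizeof(int));
            int h_out[n];
            cudaMemcpy(h_out, d_out, count * sizeof(int), cudaMemcpyDeviceToHost);
            for (int k = 0; k < count; ++k) printf("%d ", h_out[k]);
            printf("\n");
            return 0;
        }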

    Read the article

  • C# Abort()ing threads on exit for a Form

    - by Gio Borje
    So far I have this code run when the X button is clicked, but I'm not sure if this is the correct way to terminate threads on a form on exit.

        Type t = this.GetType();
        foreach (PropertyInfo pi in t.GetProperties())
        {
            if (pi.GetType() == typeof(Thread))
            {
                MethodInfo mi = pi.GetType().GetMethod("Abort");
                mi.Invoke(null, new object[] {});
            }
        }

    I keep getting this error: "An attempt has been made to free an RCW that is in use. The RCW is in use on the active thread or another thread. Attempting to free an in-use RCW can cause corruption or data loss."
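
    As a point of comparison, here is a sketch of a common alternative (my code, with made-up names): keep explicit references to the worker threads, mark them as background threads, and shut them down cooperatively in FormClosing rather than reflecting over properties and calling Abort.

        using System;
        using System.Collections.Generic;
        using System.Threading;
        using System.Windows.Forms;

        public class MainForm : Form
        {
            private readonly List<Thread> workers = new List<Thread>();
            private volatile bool stopRequested;

            public MainForm()
            {
                for (int i = 0; i < 3; i++)
                {
                    Thread t = new Thread(WorkLoop);
                    t.IsBackground = true;   // background threads die with the process anyway
                    workers.Add(t);
                    t.Start();
                }
                FormClosing += OnFormClosing;
            }

            private void WorkLoop()
            {
                while (!stopRequested)
                {
                    Thread.Sleep(100);       // stand-in for real work
                }
            }

            private void OnFormClosing(object sender, FormClosingEventArgs e)
            {
                stopRequested = true;        // cooperative shutdown instead of Abort()
                foreach (Thread t in workers)
                {
                    t.Join(1000);
                }
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new MainForm());
            }
        }

    Letting each thread notice the flag and exit on its own avoids tearing down COM objects (the RCWs in the error message) out from under code that is still using them.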

    Read the article
