Search Results

Search found 5092 results on 204 pages for 'kernel tracker'.

Page 187/204 | < Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >

  • Getting Argument Names In Ruby Reflection

    - by Joe Soul-bringer
    I would like to do some fairly heavy-duty reflection in the Ruby programming language. I would like to create a function which would return the names of the arguments of various calling functions higher up the call stack (just one higher would be enough, but why stop there?). I could use Kernel.caller, go to the file, and parse the argument list, but that would be ugly and unreliable. The function that I would like would work in the following way: module A def method1( tuti, fruity) foo end def method2(bim, bam, boom) foo end def foo print caller_args[1].join(",") #the "1" means one step up the call stack end end A.method1 #prints "tuti,fruity" A.method2 #prints "bim, bam, boom" I would not mind using ParseTree or some similar tool for this task, but looking at ParseTree, it is not obvious how to use it for this purpose. Creating a C extension like this is another possibility, but it would be nice if someone had already done it for me. Edit2: I can see that I'll probably need some kind of C extension. I suppose that means my question is what combination of C extension would work most easily. I don't think caller+ParseTree would be enough by themselves. As far as why I would like to do this goes, rather than saying "automatic debugging", perhaps I should say that I would like to use this functionality to do automatic checking of the calling and return conditions of functions. Say def add x, y check_positive return x + y end where check_positive would throw an exception if x and y weren't positive (obviously, there would be more to it than that, but hopefully this gives enough motivation).

    Read the article

  • Avoiding anemic domain model - a real example

    - by cbp
    I am trying to understand Anemic Domain Models and why they are supposedly an anti-pattern. Here is a real-world example. I have an Employee class, which has a ton of properties - name, gender, username, etc public class Employee { public string Name { get; set; } public string Gender { get; set; } public string Username { get; set; } // Etc.. mostly getters and setters } Next we have a system that involves rotating incoming phone calls and website enquiries (known as 'leads') evenly amongst sales staff. This system is quite complex as it involves round-robining enquiries, checking for holidays, employee preferences etc. So this system is currently separated out into a service: EmployeeLeadRotationService. public class EmployeeLeadRotationService : IEmployeeLeadRotationService { private IEmployeeRepository _employeeRepository; // ...plus lots of other injected repositories and services public void SelectEmployee(ILead lead) { // Etc. lots of complex logic } } Then in the code behind our website enquiry form we have code like this: public void SubmitForm() { var lead = CreateLeadFromFormInput(); var selectedEmployee = Kernel.Get<IEmployeeLeadRotationService>() .SelectEmployee(lead); Response.Write(selectedEmployee.Name + " will handle your enquiry. Thanks."); } I don't really encounter many problems with this approach, but supposedly this is something that I should run screaming from because it is an Anemic Domain Model. But to me it's not clear where the logic in the lead rotation service should go. Should it go in the lead? Should it go in the employee? What about all the injected repositories etc that the rotation service requires - how would they be injected into the employee, given that most of the time when dealing with an employee we don't need any of these repositories?

    Read the article

  • Qt/MFC Migration Framework tool: properly exiting DLL?

    - by User
    I'm using the Qt/MFC Migration Framework tool following this example: http://doc.qt.nokia.com/solutions/4/qtwinmigrate/winmigrate-qt-dll-example.html The dll I build is loaded by a 3rd party MFC-based application. The 3rd party app basically calls one of my exported DLL functions to startup my plugin and another function to shutdown my application. Currently I'm doing nothing in my shutdown function. When I load my DLL in the 3rd party app the startup function is called and my DLL starts successfully and I can see my message box. However if I shutdown my plugin and then try to start it again I get the following error: Debug Error! Program: <my 3rd party app> Module: 4.7.1 File: global\qglobal.cpp Line: 2262 ASSERT failure in QWidget: "Widgets must be created in the GUI thread.", file kernel\qwidget.cpp line 1233 (Press Retry to debug the application) Abort Retry Ignore This makes me think I'm not doing something to properly shutdown my plugin. What do I need to do to shut it down properly? UPDATE: http://doc.qt.nokia.com/solutions/4/qtwinmigrate/winmigrate-walkthrough.html says: The DLL also has to make sure that it can be loaded together with other Qt based DLLs in the same process (in which case a QApplication object will probably exist already), and that the DLL that creates the QApplication object remains loaded in memory to avoid other DLLs using memory that is no longer available to the process. So I wonder if there is some problem where I need to somehow keep the original DLL loaded no matter what?

    Read the article

  • Lightweight alternative to Manual/AutoResetEvent in C#

    - by sweetlilmre
    Hi, I have written what I hope is a lightweight alternative to using the ManualResetEvent and AutoResetEvent classes in C#/.NET. The reasoning behind this was to have Event like functionality without the weight of using a kernel locking object. Although the code seems to work well in both testing and production, getting this kind of thing right for all possibilities can be a fraught undertaking and I would humbly request any constructive comments and or criticism from the StackOverflow crowd on this. Hopefully (after review) this will be useful to others. Usage should be similar to the Manual/AutoResetEvent classes with Notify() used for Set(). Here goes: using System; using System.Threading; public class Signal { private readonly object _lock = new object(); private readonly bool _autoResetSignal; private bool _notified; public Signal() : this(false, false) { } public Signal(bool initialState, bool autoReset) { _autoResetSignal = autoReset; _notified = initialState; } public virtual void Notify() { lock (_lock) { // first time? if (!_notified) { // set the flag _notified = true; // unblock a thread which is waiting on this signal Monitor.Pulse(_lock); } } } public void Wait() { Wait(Timeout.Infinite); } public virtual bool Wait(int milliseconds) { lock (_lock) { bool ret = true; // this check needs to be inside the lock otherwise you can get nailed // with a race condition where the notify thread sets the flag AFTER // the waiting thread has checked it and acquires the lock and does the // pulse before the Monitor.Wait below - when this happens the caller // will wait forever as he "just missed" the only pulse which is ever // going to happen if (!_notified) { ret = Monitor.Wait(_lock, milliseconds); } if (_autoResetSignal) { _notified = false; } return (ret); } } }

    Read the article

  • Android 3.1+ USB as virtual COM port

    - by ZachMc
    I have a third party usb device, that when plugged into a Windows machine, is recognized as a serial device and assigned to the COM 4 port. I can communicate with the device just like I would with a device connected via a serial port. For instance, I can write "abc" serially to the device via the USB connection. I have been searching for a way to do a similar thing in Android. If I try the Usb Host method, and use a UsbManager to open the UsbDevice, I can get one interface, with 2 endpoints. I have tried sending control messages using the method in UsbDeviceConnection, but the method returns -1 for everything (though I don't know what I should use for the parameters of that method). Is there a way to get an OutputStream that I can write to that will send bytes to the USB device? Right now I am looking at recompiling the kernel to include a virtual COM port driver and write some native code to be able to do this. Thanks! Edit: I am using the FTDI serial to USB converter circuit. Is this compatible with Android?

    Read the article

  • How do I use the sed command to remove all lines between 2 phrases (including the phrases themselves)?

    - by fzkl
    I am generating a log from which I want to remove X startup output which looks like this: X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.31-607-imx51 armv7l Ubuntu Current Operating System: Linux nvidia 2.6.33.2 #1 SMP PREEMPT Mon May 31 21:38:29 PDT 2010 armv7l Kernel command line: mem=448M@0M nvmem=64M@448M mem=512M@512M chipuid=097c81c6425f70d7 vmalloc=320M video=tegrafb console=ttyS0,57600n8 usbcore.old_scheme_first=1 tegraboot=nand root=/dev/nfs ip=:::::usb0:on rw tegra_ehci_probe_delay=5000 smp dvfs tegrapart=recovery:1b80:a00:800,boot:2680:1000:800,environment:3780:40:800,system:38c0:2bc00:800,cache:2f5c0:4000:800,userdata:336c0:c840:800 envsector=3080 Build Date: 23 April 2010 05:19:26PM xorg-server 2:1.7.6-2ubuntu7 (Bryce Harrington <[email protected]>) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Wed Jun 16 19:52:00 2010 (==) Using config file: "/etc/X11/xorg.conf" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" Is there any way to do this without manually checking pattern for each line?

    Read the article

  • pthread condition variables on Linux, odd behaviour.

    - by janesconference
    Hi. I'm synchronizing reader and writer processes on Linux. I have 0 or more processes (the readers) that need to sleep until they are woken up, read a resource, go back to sleep and so on. Please note I don't know how many reader processes are up at any moment. I have one process (the writer) that writes on a resource, wakes up the readers and does its business until another resource is ready (in detail, I developed a no-starve readers-writers solution, but that's not important). To implement the sleep / wake up mechanism I use a POSIX condition variable, pthread_cond_t. The clients call a pthread_cond_wait() on the variable to sleep, while the server does a pthread_cond_broadcast() to wake them all up. As the manual says, I surround these two calls with a lock/unlock of the associated pthread mutex. The condition variable and the mutex are initialized in the server and shared between processes through a shared memory area (because I'm not working with threads, but with separate processes) and I'm sure my kernel / syscalls support it (because I checked _POSIX_THREAD_PROCESS_SHARED). What happens is that the first client process sleeps and wakes up perfectly. When I start the second process, it blocks on its pthread_cond_wait() and never wakes up, even if I'm sure (by the logs) that pthread_cond_broadcast() is called. If I kill the first process, and launch another one, it works perfectly. In other words, pthread_cond_broadcast() seems to wake up only one process at a time. If more than one process waits on the very same shared condition variable, only the first one manages to wake up correctly, while the others just seem to ignore the broadcast. Why this behaviour? If I send a pthread_cond_broadcast(), every waiting process should wake up, not just one (and, in any case, not always the same one).

    Read the article
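
    One detail worth checking in a setup like this: a pthread_cond_t and pthread_mutex_t that live in shared memory and are used from separate processes normally have to be initialised with the PTHREAD_PROCESS_SHARED attribute; the static initialisers only guarantee behaviour within a single process. Below is a minimal sketch of that initialisation (the struct layout, names and the anonymous shared mapping are illustrative assumptions, not taken from the question, which uses its own shared-memory area):

        #include <pthread.h>
        #include <sys/mman.h>

        struct shared_sync {
            pthread_mutex_t mutex;
            pthread_cond_t  cond;
        };

        /* Map a shared region and initialise process-shared primitives.
         * Done once in the writer, before any reader starts waiting. */
        struct shared_sync *init_shared_sync(void)
        {
            struct shared_sync *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            if (s == MAP_FAILED)
                return NULL;

            pthread_mutexattr_t ma;
            pthread_mutexattr_init(&ma);
            pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
            pthread_mutex_init(&s->mutex, &ma);
            pthread_mutexattr_destroy(&ma);

            pthread_condattr_t ca;
            pthread_condattr_init(&ca);
            pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
            pthread_cond_init(&s->cond, &ca);
            pthread_condattr_destroy(&ca);

            return s;
        }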

  • PHP (rar): I want to rar a folder using RAR on Ubuntu (Linux) from PHP (on a dedicated server)

    - by Steve
    I want to rar (not tar) a folder on my server using PHP. RAR 3.93, Copyright (c) 1993-2010 Alexander Roshal, 15 Mar 2010, registered. OS: Ubuntu (Karmic), kernel Linux 2.6.32.2-xxxx-grs-ipv4-32, Gnome 2.28.1, latest PHP and lighttpd. I have tried these things: http://php.net/manual/en/function.escapeshellarg.php (maybe wrong code), http://php.net/manual/en/function.exec.php, http://php.net/manual/en/function.shell-exec.php. My command (working in SSH and as a Nautilus script): rar a -m0 /where/file/will/saved/file_name.rar /location/ti/data/dir/datafolder PHP code: $log = shell_exec("rar a -m0 /where/file/will/saved/file_name.rar /location/ti/data/dir/datafolder"); echo $log; One method is left which I don't know how to use, and it does work on the server: executing a .sh file from PHP and passing it some variables (the command). I also tried a script named RapidLeech that can rar files, but it only rars files from its own directory, and I want to work with different directories (Rapid Leech rar class: http://paste2.org/p/791668). I am able to run shell commands with PHP (cp, mv, ls, rm), but running rar fails and gives no output; I also tried giving the full path to rar. I have used a lot of commands with PHP's shell_exec function and they work just like they do over SSH, but I have tried almost every method given on the net and failed for the last 3 days. Please help me get a working PHP script; I am a beginner (just reading my first PHP book), so please keep it simple. Thanks.

    Read the article

  • Canceling a WSK I/O operation when driver is unloading

    - by eaducac
    I've been learning how to write drivers with the Windows DDK recently. After creating a few test drivers experimenting with system threads and synchronization, I decided to step it up a notch and write a driver that actually does something, albeit something useless. Currently, my driver connects to my other computer using Winsock Kernel and just loops and echoes back whatever I send to it until it gets the command "exit", which causes it to break out of the loop. In my loop, after I call WskReceive() to get some data from the other computer, I use KeWaitForMultipleObjects() to wait for either of two SynchronizationEvents. BlockEvent gets set by my IRP's CompletionRoutine() to let my thread know that it's received some data from the socket. EndEvent gets set by my DriverUnload() routine to tell the thread that it's being unloaded and it needs to terminate. When I send the "exit" command, the thread terminates with no problems, and I can safely unload the driver afterward. If I try to stop the driver while it's still waiting on data from the other computer, however, it blue screens with the error DRIVER_UNLOADED_WITHOUT_CANCELLING_PENDING_OPERATIONS. After I get the EndEvent but before I exit the loop, I've tried canceling the IRP with IoCancelIrp() and completing it with IoCompleteRequest(), but both of those give me DRIVER_IRQL_NOT_LESS_OR_EQUAL errors. I then tried calling WskDisconnect(), hoping that would cause the receive operation to complete, but that took me back to the CANCELLING_PENDING_OPERATIONS error. How do I cancel my pending I/O operation from my WSK socket at the IRQL I'm running at when the driver is unloaded?

    Read the article

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor, can be garbage collected after my code has finished using it. I have tried the suggestion in the answers from the previous question, but it does not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is that possible to do in a reliable way ? EDIT I currently have the following test based on Paul Stovell's answer, which succeeds: [TestMethod] public void ReleaseTest() { WindsorContainer container = new WindsorContainer(); container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy(); container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient); Assert.AreEqual(0, ReleaseTester.refCount); var weakRef = new WeakReference(container.Resolve<ReleaseTester>()); Assert.AreEqual(1, ReleaseTester.refCount); GC.Collect(); GC.WaitForPendingFinalizers(); Assert.AreEqual(0, ReleaseTester.refCount, "Component not released"); } private class ReleaseTester { public static int refCount = 0; public ReleaseTester() { refCount++; } ~ReleaseTester() { refCount--; } } Am I right assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy ?

    Read the article

  • How to optimize Conway's game of life for CUDA?

    - by nlight
    I've written this CUDA kernel for Conway's game of life: __global__ void gameOfLife(float* returnBuffer, int width, int height) { unsigned int x = blockIdx.x*blockDim.x + threadIdx.x; unsigned int y = blockIdx.y*blockDim.y + threadIdx.y; float p = tex2D(inputTex, x, y); float neighbors = 0; neighbors += tex2D(inputTex, x+1, y); neighbors += tex2D(inputTex, x-1, y); neighbors += tex2D(inputTex, x, y+1); neighbors += tex2D(inputTex, x, y-1); neighbors += tex2D(inputTex, x+1, y+1); neighbors += tex2D(inputTex, x-1, y-1); neighbors += tex2D(inputTex, x-1, y+1); neighbors += tex2D(inputTex, x+1, y-1); __syncthreads(); float final = 0; if(neighbors < 2) final = 0; else if(neighbors > 3) final = 0; else if(p != 0) final = 1; else if(neighbors == 3) final = 1; __syncthreads(); returnBuffer[x + y*width] = final; } I am looking for errors/optimizations. Parallel programming is quite new to me and I am not sure if I get how to do it right. The rest of the app is: Memcpy input array to a 2D texture inputTex stored in a CUDA array. Output is memcpy-ed from global memory to host and then dealt with. As you can see, a thread deals with a single pixel. I am unsure if that is the fastest way, as some sources suggest doing a row or more per thread. If I understand correctly, NVidia themselves say that the more threads, the better. I would love advice on this from someone with practical experience.

    Read the article

  • Why is my JavaScript file sometimes compressed and sometimes not? (IIS gzip problem)

    - by Kevin Yang
    I enabled gzip for JavaScript files in my IIS settings; here's the corresponding config section. <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="10" dynamicCompressionLevel="8" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/soap+msbin1" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/javascript" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression> Currently, when I download my JS file, it seems that sometimes the server returns the gzipped one and sometimes not. I don't know why, or how to debug it. If a file is already gzipped, it should be cached on the local disk, and the next time someone visits that file, the IIS kernel should return the cached gzip file directly without compressing it again. Is that right?

    Read the article

  • Linux C debugging library to detect memory corruptions

    - by calandoa
    When working some time ago on an embedded system with a simple MMU, I used to program this MMU dynamically to detect memory corruption. For instance, at some moment at runtime, the foo variable was overwritten with some unexpected data (probably by a dangling pointer or whatever). So I added the additional debugging code: at init, the memory used by foo was marked as a forbidden region to the MMU; each time foo was accessed on purpose, access to the region was allowed just before and then forbidden just after; an MMU IRQ handler was added to dump the master and the address responsible for the violation. This was actually some kind of watchpoint, but directly self-handled by the code itself. Now, I would like to reuse the same trick, but on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how it is used by Linux, but I wonder if any library/tool/system call already exists to deal with this problem. Note that I am aware that various tools exist like Valgrind or GDB to manage memory problems, but as far as I know, none of these tools can be dynamically reconfigured by the debugged code. I am mainly interested in user space under Linux, but any info on kernel mode or under Windows is also welcome!

    Read the article
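
    For the user-space Linux side of this, the closest analogue to the embedded MMU trick described above is mprotect() combined with a SIGSEGV handler: forbid access to the page holding the variable, and re-open it only around intentional accesses. A rough, hedged sketch follows (variable and function names are illustrative; fprintf is not async-signal-safe, so a real tool would log differently and would re-protect the page after the access):

        #include <signal.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static int *watched;          /* lives on its own page */
        static size_t page;

        static void segv_handler(int sig, siginfo_t *si, void *ctx)
        {
            (void)sig; (void)ctx;
            /* Dump the faulting address, then re-enable access so the
             * faulting instruction can be re-executed and continue. */
            fprintf(stderr, "unexpected access at %p\n", si->si_addr);
            mprotect(watched, page, PROT_READ | PROT_WRITE);
        }

        int main(void)
        {
            page = (size_t)sysconf(_SC_PAGESIZE);
            watched = mmap(NULL, page, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            struct sigaction sa;
            memset(&sa, 0, sizeof sa);
            sa.sa_flags = SA_SIGINFO;
            sa.sa_sigaction = segv_handler;
            sigaction(SIGSEGV, &sa, NULL);

            mprotect(watched, page, PROT_NONE);   /* forbid all access */
            *watched = 42;                        /* triggers the handler */
            return 0;
        }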

  • Building a custom Linux Live CD

    - by Mike Heinz
    Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those all are designed for showing off Linux on the desktop - which means they take forever to boot. Can anyone fix me up with a current How-To? Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOs, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file!

    Read the article

  • erlide, which eclipse/which packages?

    - by KevinDTimm
    I have downloaded eclipse 3.4 (java version) for MacOSX (carbon). I have tried to 'update' to the erlide, but see many (duplicated) options (many erlide, options that say 'only for erl SDK updates', etc.) Sometimes I get 403 errors when attempting to access http://erlide.org/update and http://erlide.sourceforge.net/update. Finally, when I get some set of options installed, I either get errors like : Loading of /Users/kevindtimm/Documents/eclipse-java-ganymede-SR2-macosx-carbon/eclipse/plugins/org.erlide.kernel.common_0.8.1.201005250801/ebin/erlide_kernel_common.beam failed: badfile (hello_world@ktmac)1> =ERROR REPORT==== 24-Nov-2010::19:17:32 === beam/beam_load.c(1768): Error loading function erlide_kernel_common:monitor/0: op put_string u u x: please re-compile this module with an R14B compiler or, when I've done different installations of erlide, I get no response in the console to : hello:hello(). Does anybody have a good reference for how to load this plug-in and which items I should install? -module(hello). -export([hello/0]). hello() -> io:write("Hello World\n"). [edit] I have installed eclipse 3.6 (c++) as requested below, and the following code still can't find hello:hello(). %%file_comment -module(hello). %% %% Include files %% %% %% Exported Functions %% -export([hello/0]). %% %% API Functions %% %% %% Local Functions %% hello() -> io:write("Hello World\n"). [/edit]

    Read the article

  • OpenCL Matrix Multiplication - Getting wrong answer

    - by Yash
    Here's a simple OpenCL matrix multiplication kernel which is driving me crazy: __kernel void matrixMul( __global int* C, __global int* A, __global int* B, int wA, int wB){ int row = get_global_id(1); //2D Thread ID x int col = get_global_id(0); //2D Thread ID y //Perform dot-product accumulated into value int value; for ( int k = 0; k < wA; k++ ){ value += A[row*wA + k] * B[k*wB+col]; } C[row*wA+col] = value; //Write to the device memory } Where (inputs) A = [72 45 75 61] B = [26 53 46 76] Output I am getting: C = [3942 7236 3312 5472] But the output should be: C = [3943 7236 4756 8611] The problem I am facing here is that for any dimension of array, the elements of the first row of the resulting matrix are correct, while the elements of all the other rows are wrong. By the way, I am using pyopencl. I don't know what mistake I am making here. I have spent the entire day on this with no luck. Please help me with this.

    Read the article
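
    Two things in the kernel above look suspicious, offered here as guesses rather than a verified fix: value is never initialised before being accumulated into, so each work-item starts from whatever happens to be in private memory, and the store indexes C with wA where the row stride of C should be wB (harmless for the square case shown, wrong otherwise). A corrected sketch of the same OpenCL C kernel:

        __kernel void matrixMul(__global int* C, __global int* A, __global int* B,
                                int wA, int wB)
        {
            int row = get_global_id(1);
            int col = get_global_id(0);

            int value = 0;                     /* must start at zero */
            for (int k = 0; k < wA; k++)
                value += A[row * wA + k] * B[k * wB + col];

            C[row * wB + col] = value;         /* row stride of C is wB, not wA */
        }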

  • How are two-dimensional arrays formatted in memory?

    - by Chris Cooper
    In C, I know I can dynamically allocate a two-dimensional array on the heap using the following code: int** someNumbers = malloc(arrayRows*sizeof(int*)); for (i = 0; i < arrayRows; i++) { someNumbers[i] = malloc(arrayColumns*sizeof(int)); } Clearly, this actually creates a one-dimensional array of pointers to a bunch of separate one-dimensional arrays of integers, and "The System" can figure out what I mean when I ask for: someNumbers[4][2]; But when I statically declare a 2D array, as in the following line...: int someNumbers[ARRAY_ROWS][ARRAY_COLUMNS]; ...does a similar structure get created on the stack, or is it of another form completely? (i.e. is it a 1D array of pointers? If not, what is it, and how do references to it get figured out?) Also, when I said "The System", what is actually responsible for figuring that out? The kernel? Or does the C compiler sort it out while compiling?

    Read the article
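
    A small illustration of the difference (a sketch, with illustrative names): the statically declared array is one contiguous block whose indexing the compiler turns into plain address arithmetic, while the malloc'd version stores an extra array of row pointers to separately allocated rows. Printing sizes and addresses makes the two layouts easy to compare:

        #include <stdio.h>
        #include <stdlib.h>

        #define ROWS 3
        #define COLS 4

        int main(void)
        {
            /* One contiguous block of ROWS*COLS ints; a[i][j] compiles to
             * *(base + i*COLS + j). No pointers are stored anywhere. */
            int a[ROWS][COLS];
            printf("sizeof a = %zu (= %zu ints)\n", sizeof a, sizeof a / sizeof(int));
            printf("&a[0][0] = %p, &a[1][0] = %p (differ by COLS ints)\n",
                   (void *)&a[0][0], (void *)&a[1][0]);

            /* Array of row pointers; rows may land anywhere on the heap. */
            int **b = malloc(ROWS * sizeof *b);
            for (int i = 0; i < ROWS; i++)
                b[i] = malloc(COLS * sizeof **b);
            printf("sizeof b = %zu (just a pointer)\n", sizeof b);
            printf("b[0] = %p, b[1] = %p (unrelated allocations)\n",
                   (void *)b[0], (void *)b[1]);

            for (int i = 0; i < ROWS; i++)
                free(b[i]);
            free(b);
            return 0;
        }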

  • Can I prevent a Linux user space pthread yielding in critical code?

    - by KermitG
    I am working on an user space app for an embedded Linux project using the 2.6.24.3 kernel. My app passes data between two file nodes by creating 2 pthreads that each sleep until a asynchronous IO operation completes at which point it wakes and runs a completion handler. The completion handlers need to keep track of how many transfers are pending and maintain a handful of linked lists that one thread will add to and the other will remove. // sleep here until events arrive or time out expires for(;;) { no_of_events = io_getevents(ctx, 1, num_events, events, &timeout); // Process each aio event that has completed or thrown an error for (i=0; i<no_of_events; i++) { // Get pointer to completion handler io_complete = (io_callback_t) events[i].data; // Get pointer to data object iocb = (struct iocb *) events[i].obj; // Call completion handler and pass it the data object io_complete(ctx, iocb, events[i].res, events[i].res2); } } My question is this... Is there a simple way I can prevent the currently active thread from yielding whilst it runs the completion handler rather than going down the mutex/spin lock route? Or failing that can Linux be configured to prevent yielding a pthread when a mutex/spin lock is held?

    Read the article
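
    As far as I know there is no portable way for an ordinary user-space pthread to forbid preemption outright; the usual compromises are sketched below, hedged rather than prescriptive: either take a plain mutex around the list manipulation (the route the question hopes to avoid), or move the thread to a real-time scheduling class so it is only preempted by higher-priority threads. The priority value is an arbitrary example and SCHED_FIFO requires appropriate privileges:

        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>

        static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

        /* Option 1: conventional locking around the shared linked lists. */
        void update_lists_locked(void)
        {
            pthread_mutex_lock(&list_lock);
            /* ... add/remove list nodes, adjust the pending-transfer count ... */
            pthread_mutex_unlock(&list_lock);
        }

        /* Option 2: raise the calling thread to SCHED_FIFO so that only
         * higher-priority real-time threads (or the kernel) preempt it.
         * Needs CAP_SYS_NICE / root; priority 50 is an arbitrary example. */
        int make_thread_realtime(void)
        {
            struct sched_param sp = { .sched_priority = 50 };
            int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
            if (rc != 0) {
                fprintf(stderr, "pthread_setschedparam failed: %d\n", rc);
                return -1;
            }
            return 0;
        }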

  • What can a company possibly gain by making Android phones hard to root?

    - by Chinmay Kanchi
    As someone who recently got a HTC Hero, I had to jump through several hoops to get root access on the phone to install custom firmware. Now, Android is open-source and fairly easy to build and hack on an emulator. It seems to be against the spirit of open-source to lock down a phone so you can't hack the phone itself. Now, often, there are understandable (though not always justifiable) reasons for locking a device down. For example, it might have proprietary software on it or you might want to retain control of the platform. However, Android by its open-source nature makes such concerns moot. Everyone and their dog has access to the userland code, and HTC is forced by the GPL to release kernel sources for each of their devices. So, I fail to see any motivation for alienating the hackers, when there is no possible benefit (in my mind) to be had from doing this. Any idea why a company would want to do this? Is it just short-sightedness or am I missing possible commercial implications of this?

    Read the article

  • About Interview structure for test automation lab developers

    - by Ikaso
    Hi, I am interviewing new applicants for a team that is doing test automation on our company product(s). The team is composed of junior software developers and a team leader. The product runs on Windows and has both managed and unmanaged parts. The test automation is done on both the client side (user mode and kernel mode) and the server side (IIS, Windows services, backend). We are doing mainly integration tests and black-box tests. I am trying to figure out how to organize my interview. My overall idea is to ask about a project they have done, then ask some technical questions (multithreading, GC, design patterns) and one programming question. Please note that there is another interview done before mine with 2 programming questions. My programming question is rather simple (for example: reversing a singly-linked list). My coworkers think that my questions will not find good developers since my questions are rather simple and well known, but so far most of the applicants fail those questions. My questions are: Should I change the structure of my interview for this kind of job? What questions do you ask to figure out if the applicant is test oriented? (Maybe I should provide a buggy implementation of a problem and let them find the bugs, and then ask them what tests they would have done) Regards,

    Read the article

  • Time gaps between host clEnqueue_xxx calls

    - by dialer
    Consider these OpenCL calls (3 memcpy DtoH, 4313 cl_float elements each): clEnqueueReadBuffer(CommandQueue, SpectrumAbsMem, CL_FALSE, 0, SpectrumMemSize, SpectrumAbs, 0, NULL, NULL); clEnqueueReadBuffer(CommandQueue, SpectrumReMem, CL_FALSE, 0, SpectrumMemSize, SpectrumRe, 0, NULL, NULL); clEnqueueReadBuffer(CommandQueue, SpectrumImMem, CL_FALSE, 0, SpectrumMemSize, SpectrumIm, 0, NULL, NULL); When I analyze these with the NVIDIA visual profiler, I see that the actual memcpy operation only takes 8 us, but there is a significant gap of around 130 us after each memcpy. I'm already using the supposedly asynchronous method (the CL_FALSE in the argument list). When I use only one operation, but with three times the size, the operation is way faster. Why is the time gap between the actual memcpy operations so huge, whereas the gap between the kernel execution (immediately before these three operations) and the first memcpy is only 7 us? Can I get rid of it, or do I need to accumulate more data before starting a memcpy? If so, is there a convenient way to combine multiple arrays into a single contiguous block of memory, but still have a cl_mem object as a separate device memory pointer to each section?

    Read the article
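
    On the last point, one contiguous allocation with a separate cl_mem handle per section, the usual tool is clCreateSubBuffer from OpenCL 1.1. A hedged sketch (names are illustrative; each sub-buffer origin must respect the device's CL_DEVICE_MEM_BASE_ADDR_ALIGN alignment):

        #include <CL/cl.h>

        /* Carve equally sized regions out of one parent buffer so a single
         * clEnqueueReadBuffer can fetch all of them, while kernels still see
         * a distinct cl_mem argument per section. */
        cl_mem make_section(cl_mem parent, size_t section_bytes, unsigned index,
                            cl_int *err)
        {
            cl_buffer_region region;
            region.origin = (size_t)index * section_bytes;  /* must be aligned */
            region.size   = section_bytes;
            return clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                     CL_BUFFER_CREATE_TYPE_REGION, &region, err);
        }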

  • What can cause a large discrepancy between minor GC time and total pause time?

    - by cxcg
    We have a latency-sensitive application, and are experiencing some GC-related pauses we don't fully understand. We occasionally have a minor GC that results in application pause times that are much longer than the reported GC time itself. Here is an example log snippet: 485377.257: [GC 485378.857: [ParNew: 105845K->621K(118016K), 0.0028070 secs] 136492K->31374K(1035520K), 0.0028720 secs] [Times: user=0.01 sys=0.00, real=1.61 secs] Total time for which application threads were stopped: 1.6032830 seconds The total pause time here is orders of magnitude longer than the reported GC time. These are isolated and occasional events: the immediately preceding and succeeding minor GC events do not show this large discrepancy. The process is running on a dedicated machine, with lots of free memory, 8 cores, running Red Hat Enterprise Linux ES Release 4 Update 8 with kernel 2.6.9-89.0.1EL-smp. We have observed this with (32 bit) JVM versions 1.6.0_13 and 1.6.0_18. We are running with these flags: -server -ea -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:NewSize=128m -XX:MaxNewSize=128m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-TraceClassUnloading Can anybody offer some explanation as to what might be going on here, and/or some avenues for further investigation?

    Read the article

  • Strange results while measuring delta time on Linux

    - by pachanga
    Folks, could you please explain why I'm getting very strange results from time to time using the following code: #include <unistd.h> #include <sys/time.h> #include <time.h> #include <stdio.h> int main() { struct timeval start, end; long mtime, seconds, useconds; while(1) { gettimeofday(&start, NULL); usleep(2000); gettimeofday(&end, NULL); seconds = end.tv_sec - start.tv_sec; useconds = end.tv_usec - start.tv_usec; mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5; if(mtime > 10) printf("WTF: %ld\n", mtime); } return 0; } (You can compile and run it with: gcc test.c -o out -lrt && ./out) What I'm experiencing is sporadic big values of the mtime variable almost every second, or even more often, e.g.: $ gcc test.c -o out -lrt && ./out WTF: 14 WTF: 11 WTF: 11 WTF: 11 WTF: 14 WTF: 13 WTF: 13 WTF: 11 WTF: 16 How can this be possible? Is the OS to blame? Does it do too much context switching? But my box is idle (load average: 0.02, 0.02, 0.3). Here is my Linux kernel version: $ uname -a Linux kurluka 2.6.31-21-generic-pae #59-Ubuntu SMP Wed Mar 24 07:28:56 UTC 2010 i686 GNU/Linux

    Read the article
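
    One thing worth ruling out before blaming the scheduler: gettimeofday() follows wall-clock time, which NTP may slew or step, so interval measurements are normally taken with CLOCK_MONOTONIC instead. A hedged variant of the same loop is sketched below; it will still show genuine scheduling delays from usleep(), but not clock adjustments:

        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        int main(void)
        {
            struct timespec start, end;

            for (;;) {
                clock_gettime(CLOCK_MONOTONIC, &start);
                usleep(2000);                     /* may legitimately overshoot */
                clock_gettime(CLOCK_MONOTONIC, &end);

                long mtime = (end.tv_sec - start.tv_sec) * 1000
                           + (end.tv_nsec - start.tv_nsec) / 1000000;
                if (mtime > 10)
                    printf("WTF: %ld\n", mtime);
            }
            return 0;
        }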

  • What can cause a spontaneous EPIPE error without either end calling close() or crashing?

    - by Hongli
    I have an application that consists of two processes (let's call them A and B), connected to each other through Unix domain sockets. Most of the time it works fine, but some users report the following behavior: A sends a request to B. This works. A now starts reading the reply from B. B sends a reply to A. The corresponding write() call returns an EPIPE error, and as a result B close() the socket. However, A did not close() the socket, nor did it crash. A's read() call returns 0, indicating end-of-file. A thinks that B prematurely closed the connection. Users have also reported variations of this behavior, e.g.: A sends a request to B. This works partially, but before the entire request is sent A's write() call returns EPIPE, and as a result A close() the socket. However B did not close() the socket, nor did it crash. B reads a partial request and then suddenly gets an EOF. The problem is I cannot reproduce this behavior locally at all. I've tried OS X and Linux. The users are on a variety of systems, mostly OS X and Linux. Things that I've already tried and considered: Double close() bugs (close() is called twice on the same file descriptor): probably not as that would result in EBADF errors, but I haven't seen them. Increasing the maximum file descriptor limit. One user reported that this worked for him, the rest reported that it did not. What else can possibly cause behavior like this? I know for certain that neither A nor B close() the socket prematurely, and I know for certain that neither of them have crashed because both A and B were able to report the error. It is as if the kernel suddenly decided to pull the plug from the socket for some reason.

    Read the article

  • Why is a non-blocking TCP connect() occasionally so slow on Linux?

    - by pts
    I was trying to measure the speed of a TCP server I'm writing, and I've noticed that there might be a fundamental problem of measuring the speed of the connect() calls: if I connect in a non-blocking way, connect() operations become very slow after a few seconds. Here is the example code in Python: #! /usr/bin/python2.4 import errno import os import select import socket import sys def NonBlockingConnect(sock, addr): while True: try: return sock.connect(addr) except socket.error, e: if e.args[0] not in (errno.EINPROGRESS, errno.EALREADY): raise os.write(2, '^') if not select.select((), (sock,), (), 0.5)[1]: os.write(2, 'P') def InfiniteClient(addr): while True: sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) sock.setblocking(0) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) # sock.connect(addr) NonBlockingConnect(sock, addr) sock.close() os.write(2, '.') def InfiniteServer(server_socket): while True: sock, addr = server_socket.accept() sock.close() server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) server_socket.bind(('127.0.0.1', 45454)) server_socket.listen(128) if os.fork(): # Parent. InfiniteServer(server_socket) else: addr = server_socket.getsockname() server_socket.close() InfiniteClient(addr) With NonBlockingConnect, most connect() operations are fast, but in every few seconds there happens to be one connect() operation which takes at least 2 seconds (as indicated by 5 consecutive P letters on the output). By using sock.connect instead of NonBlockingConnect all connect operations seem to be fast. How is it possible to get rid of these slow connect()s? I'm running Ubuntu Karmic desktop with the standard PAE kernel: Linux narancs 2.6.31-20-generic-pae #57-Ubuntu SMP Mon Feb 8 10:23:59 UTC 2010 i686 GNU/Linux

    Read the article
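
    For reference, the canonical non-blocking connect pattern in C is sketched below: connect, wait for writability, then read SO_ERROR to learn the deferred result. It does not by itself explain the multi-second stalls, but checking SO_ERROR after select() reports writability at least shows whether the slow attempts ultimately succeeded or failed (names and the timeout handling are illustrative):

        #include <errno.h>
        #include <fcntl.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <unistd.h>

        /* Non-blocking connect with a timeout; returns 0 on success, -1 on error. */
        int connect_nonblocking(int fd, const struct sockaddr *addr, socklen_t len,
                                int timeout_sec)
        {
            fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

            if (connect(fd, addr, len) == 0)
                return 0;                       /* connected immediately (loopback) */
            if (errno != EINPROGRESS)
                return -1;

            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            struct timeval tv = { timeout_sec, 0 };
            if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
                return -1;                      /* timeout or select error */

            int err = 0;
            socklen_t elen = sizeof err;
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen);
            if (err != 0) {
                errno = err;                    /* the deferred connect() result */
                return -1;
            }
            return 0;
        }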
