Search Results

Search found 9012 results on 361 pages for 'hardware detection'.

Page 282/361

  • 2 fundamental questions for the Androgurus ...Can someone guide me

    - by Saul Carpenter
    I haven't plunged into Android development as yet, though Java, C++ and all that is not new to me. Here are the questions, folks; any help on these is appreciated: 1. If I need to develop, test and deploy Android apps, do I NEED an Android hardware device, or is there a software Android simulator, like VMware or Virtual PC, where I can emulate the results? If there is such a thing, can you point me to more info? 2. I have a netbook (a Chinese iPad clone) running Android that has only Wi-Fi for the present. Is it possible to add the following features via the spare USB port: a USB-based 56K modem (are there Android-platform hardware drivers?) and a USB-based RJ45 (Ethernet LAN landline connection) adapter (are there Android-platform hardware drivers?). Please advise. Thanks, Saul

    Read the article

  • How do I regenerate statistics in Openx?

    - by Martin Bauer
    Due to faulty hardware, statistics generated over a 2-week period were significantly higher than normal (10,000 times higher than normal). After moving the application to a new server, the problem rectified itself. The issue I have is that there are 2 weeks of stats that are clearly wrong. I have checked the raw impressions table for the affected fortnight and it seems to be correct (i.e. stats per banner per day match the average for the previous month). Looking at the intermediate and summary impressions tables, the values are inflated. I understand from the OpenX forum (http://forum.openx.org/index.php?s=7796fd9dae40e020a010773746f3ada9&showtopic=503424297) that it's possible to regenerate stats from the raw data, but it will only regenerate stats per hour, meaning regenerating stats for 2 weeks would be very time-consuming. Is there another, more efficient way to regenerate the stats from the raw data for the affected fortnight?

    Read the article

  • Stream a continuously growing file over TCP/IP

    - by Grinner
    Hello, I have a project I'm working on where a piece of hardware produces output that is continuously written into a text file. What I need to do is stream that file over a simple TCP/IP connection as it is being written. I'm currently trying to do that with plain netcat, but netcat only sends the part of the file that has been written at the time of execution; it doesn't continue to send the rest. Right now I have a server listening with netcat on port 9000 (simply for test purposes): netcat -l 9000. And the send command is: netcat localhost 9000 < c:\OUTPUTFILE. So in my understanding netcat should actually be streaming the file, but it simply stops once everything that existed at the start of execution has been sent. It doesn't kill the connection, it simply stops sending new data. How do I get it to stream the data continuously? Thanks for any help!
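
    Plain netcat has no follow mode: it reads the file once to the current EOF and then waits. On a Unix-ish host the one-line fix is tail -f OUTPUTFILE | netcat localhost 9000, and a Windows port of tail gives the same effect. Below is a minimal C sketch of that follow-and-forward loop, assuming a POSIX host, the listener from the question on port 9000, and the growing file in the current directory:

    ```c
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* connect to the listener from the question: netcat -l 9000 */
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");
        if (sock < 0 || connect(sock, (struct sockaddr *)&addr, sizeof addr) != 0) {
            perror("connect");
            return 1;
        }

        FILE *f = fopen("OUTPUTFILE", "rb");   /* the continuously growing file */
        if (!f) {
            perror("fopen");
            return 1;
        }

        char buf[4096];
        for (;;) {
            size_t n = fread(buf, 1, sizeof buf, f);
            if (n > 0) {
                send(sock, buf, n, 0);         /* forward the newly written bytes */
            } else {
                clearerr(f);                   /* reached current EOF: clear the flag */
                usleep(200000);                /* and poll again once the file grows */
            }
        }
    }
    ```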

    Read the article

  • Mapping of memory addresses to physical modules in Windows XP

    - by Josef Grahn
    I plan to run 32-bit Windows XP on a workstation with dual processors based on Intel's Nehalem microarchitecture and triple-channel RAM. Even though XP is limited to 4 GB of RAM, my understanding is that it will function with more than 4 GB installed, but will only expose 4 GB (or slightly less). My question is: assuming that 6 GB of RAM is installed in six 1 GB modules, which physical 4 GB will Windows actually map into its address space? In particular: Will it use all six 1 GB modules, taking advantage of all memory channels? (My guess is yes, and that the mapping to individual modules within a group happens in hardware.) Will it map 2 GB of address space to each of the two NUMA nodes (as each processor has its own memory interface), or will one processor get fast access to 3 GB of RAM while the other only has 1 GB? Thanks!

    Read the article

  • Header file-name as argument

    - by Alphaneo
    Objective: I have a list of header files (about 50 of them), and each header file has a few arrays with constant elements. I need to write a program to count the elements of the arrays and create some other form of output (which will be used by the hardware group). My solution: I included all 50-odd files and wrote an application, then dumped all the elements of the arrays in the specified format. My environment: Visual Studio 6, Windows XP. My problem: each time there is a new set of header files, I have to change the VC++ project settings to point to the new set and then rebuild. My question: a bit insane, I know, but is there any way to name the headers via command-line arguments or something? I just want to avoid re-compiling the source every time...
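
    One trick that avoids editing project settings, sketched below assuming the build can happen from a command line: standard C expands macros inside an #include directive, so the header name can be passed with cl's /D switch (gcc's -D works the same way). HEADER_FILE and the_array are hypothetical placeholders; a recompile is still needed, but it becomes a one-line command per header set.

    ```c
    /* count_elements.c (sketch)
       Build with, e.g.:  cl /DHEADER_FILE="\"new_set.h\"" count_elements.c */
    #include <stdio.h>

    #ifndef HEADER_FILE
    #error "Pass the header on the command line, e.g. /DHEADER_FILE=\"regs.h\""
    #endif

    #include HEADER_FILE   /* macro expansion inside #include is standard C */

    int main(void)
    {
        /* the_array stands in for one of the constant arrays in the header */
        printf("%u elements\n", (unsigned)(sizeof the_array / sizeof the_array[0]));
        return 0;
    }
    ```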

    Read the article

  • Doing 64-bit manipulation using 32-bit data in fixed-point arithmetic in C

    - by Viks
    Hi, I am stuck with a problem. I am working on hardware which only supports 32-bit operations: sizeof(int64_t) is 4 and sizeof(int) is 4, and I am porting an application which assumes the size of int64_t to be 8 bytes. The problem is this macro: BIG_MULL(a,b) ( ((int64_t)(a) * (int64_t)(b)) >> 23 ). The result is always a 32-bit integer, but since my system doesn't support 64-bit operations, it always returns only the LSBs of the operation, rounding off all the results and making my system crash. Can someone help me out? Regards, Vikas Gupta
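
    Since the toolchain cannot synthesize the 64-bit multiply, the macro can be replaced by a routine that builds the full product from 16x16-bit partial products held in two 32-bit halves and then applies the 23-bit shift. A sketch, assuming the Q23 format implied by the shift; note that negative results truncate toward zero here, while an arithmetic shift floors, so the two may differ by one LSB:

    ```c
    #include <stdint.h>

    /* 32-bit-only replacement for BIG_MULL: ((int64_t)a * b) >> 23 */
    static int32_t big_mull_q23(int32_t a, int32_t b)
    {
        int neg = (a < 0) != (b < 0);
        uint32_t ua = (a < 0) ? 0u - (uint32_t)a : (uint32_t)a;
        uint32_t ub = (b < 0) ? 0u - (uint32_t)b : (uint32_t)b;

        uint32_t a_lo = ua & 0xFFFFu, a_hi = ua >> 16;
        uint32_t b_lo = ub & 0xFFFFu, b_hi = ub >> 16;

        uint32_t lo   = a_lo * b_lo;        /* product bits  0..31 */
        uint32_t mid1 = a_hi * b_lo;        /* product bits 16..47 */
        uint32_t mid2 = a_lo * b_hi;        /* product bits 16..47 */
        uint32_t hi   = a_hi * b_hi;        /* product bits 32..63 */

        uint32_t mid   = mid1 + mid2;
        uint32_t mid_c = (mid < mid1);      /* carry out of the mid sum */

        uint32_t res_lo = lo + (mid << 16);
        uint32_t c1     = (res_lo < lo);    /* carry into the high word */
        uint32_t res_hi = hi + (mid >> 16) + (mid_c << 16) + c1;

        /* (res_hi:res_lo) >> 23, result assumed to fit in 32 bits */
        uint32_t shifted = (res_lo >> 23) | (res_hi << 9);
        return neg ? (int32_t)(0u - shifted) : (int32_t)shifted;
    }
    ```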

    Read the article

  • OpenLayers, Layers: Tiled vs. single tile

    - by Chau
    Each time we add a new layer to our OpenLayers-based website (data provided primarily by a GeoServer server), we discuss whether to use a single-tile or a tiled approach. Some of the parameters we evaluate are the following.

    Using the tiled approach we get:

    - Slow but continuous buildup of the viewport
    - Lots of small images
    - Client-side caching possibilities
    - Blocking of the loading pipeline (6 requests at a time)
    - A jerky feeling when navigating during load

    Using the single-tile approach we get:

    - A smoother feeling when navigating during load
    - A time delay before the layer is loaded
    - One large image for each layer
    - No caching of the single tile

    We have a lot of data editing in the layers, so a tile cache might not be that efficient. Are there any best practices when it comes to tiling? Progressing towards infinitely fast hardware and unlimited data connections, the discussion becomes irrelevant, but what configuration do you perceive as the most user-pleasing?

    Read the article

  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    Ok, that title is going to be a little bit confusing; let me try to explain it a little better. I am building a logging program. The program will have 3 main states:

    1. Write to a round-robin buffer file, keeping only the last 10 minutes of data.
    2. Write to a buffer file, ignoring time (record all data).
    3. Rename the entire buffer file and start a new one with the past 10 minutes of data (and change state to 1).

    Now, the use case is this. I have been experiencing some bottlenecks in our network from time to time, so I want to build a system that records TCP traffic when a bottleneck is detected (detection via Nagios). However, by the time Nagios detects the bottleneck, most of the useful data has already been transmitted. So what I'd like is a daemon that runs something like dumpcap all the time. In normal mode it keeps only the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). When Nagios alerts, I send a signal to the daemon to store everything; when Nagios recovers, another signal tells it to stop storing and flush the buffer to a save file. The problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones when in mode 1, but that seems a bit dirty to me (especially when it comes to figuring out where in the file the alert happened). Ideally, the saved file would always have the alert at the 10:00 mark. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point. Any ideas? Should I just do a rotating file system and combine the files into one at the end (doing quite a bit of post-processing), or is there a way to implement the semi-round-robin file cleanly, with no need for post-processing? Thanks. Oh, and the language doesn't matter much at this stage (I'm leaning towards Python, but have no objection to any other language; it's less of an issue than the overall design)...
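
    One clean way to implement that rotation, sketched in C for illustration: write fixed-interval chunk files and let normal mode simply delete the chunk that has aged out of the window. The alert signal then only has to stop the deleting, and on recovery the surviving chunks are concatenated in order, which lands the alert at a predictable offset with no file "repair". Chunk names, SIGUSR1 as the alert signal, and the 1-second stand-in for a minute of dumpcap output are all assumptions:

    ```c
    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>

    #define KEEP_CHUNKS 10   /* ten 1-minute chunks = the 10-minute window */

    static volatile sig_atomic_t store_everything = 0;

    static void on_alert(int sig) { (void)sig; store_everything = 1; }

    int main(void)
    {
        signal(SIGUSR1, on_alert);   /* Nagios alert toggles full recording */
        char name[64], old[64];
        for (long chunk = 0; ; chunk++) {
            snprintf(name, sizeof name, "chunk-%06ld.dat", chunk);
            FILE *f = fopen(name, "wb");
            if (!f) return 1;
            fprintf(f, "data for interval %ld\n", chunk);  /* capture stand-in */
            fclose(f);
            sleep(1);                /* pretend this is 60 s of dumpcap output */
            if (!store_everything && chunk >= KEEP_CHUNKS) {
                snprintf(old, sizeof old, "chunk-%06ld.dat", chunk - KEEP_CHUNKS);
                remove(old);         /* ring behaviour: drop data older than the window */
            }
        }
    }
    ```

    A finer chunk interval simply makes the alert position in the final concatenated file more precise, at the cost of more files in flight.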

    Read the article

  • Server Development Tool?

    - by aloneguid
    Hi, for my programming tasks I use about 2-3 remote servers to deploy and run my code against different conditions. This cannot be emulated locally, as the server configuration requires powerful hardware. Most of the time I need to stop a service, update binaries, start the service, view logs in real time, and download logs. Currently I'm doing this manually, and over time this becomes a real pain in the ass, especially because the environment is not ideal in terms of network bandwidth, reliability, etc. I just wonder whether other server programmers have similar problems and how you deal with them. Any special tools/hints/secrets?

    Read the article

  • High-Performance In-Browser Networking

    - by Jon Purdy
    (Similar in spirit to, but different in practice from, this question.) Is there any cross-browser-compatible, in-browser technology that allows a high-performance persistent network connection between a server application and a client written in, say, JavaScript? Think XmlHttpRequest on caffeine. I am working on a visualisation system that's restricted to at most a few users at once, and the server is pretty robust, so it can handle as much as it needs to. I would like the client to have access to video streamed from the server at a minimum of about 20 frames per second, regardless of what their graphics hardware capabilities are. Simply put: is this doable without resorting to Flash or Java?

    Read the article

  • How to profile the execution of an OSGi deployment?

    - by Jaime Soriano
    I'm starting development of an OSGi bundle for an application that will be deployed on a device with some hardware limitations. I'd like to know how I could profile the execution of that bundle to be sure that it will always fit, with its dependencies, on the final device. It would be nice to have a profiler that shows how much memory each bundle is using, localizes bottlenecks, and lets me compare different implementations of the same service. Is there any profiler for OSGi deployments, or should I use a general Java profiler? For development I'm using Pax Runner with Apache Felix to run the bundle, and Maven to manage project dependencies and building.

    Read the article

  • How to write a linter?

    - by jbdavid
    In my day job, I and others on my team write a lot of hardware models in Verilog-AMS, a language supported primarily by commercial vendors and a few open-source simulator projects. One thing that would make supporting each other's code easier would be a linter that checks our code for common problems and helps enforce a shared code-formatting style. I of course want to be able to add my own rules and, after I prove their utility to myself, promote them to the rest of the team. I don't mind doing the work that has to be done, but of course I also want to leverage the work of existing projects. Does having the allowed language syntax in a yacc or bison format give me a leg up? Or should I just suck each language statement into a Perl string and use pattern matching to find the things I don't like? (Most syntax and compilation errors are easily caught by the commercial tools, but we have some extensions of our own.)

    Read the article

  • How to utilize my computation resources.

    - by carter-boater
    Hi all, I wrote a program to solve a complicated problem. The program is just a C# console application and doesn't call Console.Write until the computation part is finished, so output won't affect the performance. The program is like this:

    ```csharp
    static void Main(string[] args)
    {
        Thread WorkerThread = new Thread(new ThreadStart(Run), StackSize);
        WorkerThread.Priority = ThreadPriority.Highest;
        WorkerThread.Start();
        Console.WriteLine("Worker thread is running...");
        WorkerThread.Join();
    }
    ```

    It currently takes 3 minutes to run, and when I open my task manager I see it only takes 12% of the CPU time. I actually have an Intel i7 CPU with 6 GB of triple-channel DDR3 memory. I am wondering how I can improve the utilization of my hardware. Thanks a lot
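
    A single thread can occupy only one logical CPU; on an i7 with four hyper-threaded cores that is one of eight, which is almost exactly the 12% Task Manager shows. If the problem can be partitioned, the fix is to spread the work over several workers. A minimal sketch of that pattern, in C with pthreads for illustration (the C# equivalent would start several Thread objects, or use Parallel.For from .NET 4, each on an independent slice):

    ```c
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 8            /* e.g. one worker per logical CPU on an i7 */
    #define N 80000000L

    static double partial[NUM_THREADS];

    static void *worker(void *arg)
    {
        long id = (long)arg;
        double sum = 0.0;
        for (long i = id; i < N; i += NUM_THREADS)   /* independent slice */
            sum += (double)i * 0.5;                  /* stand-in workload */
        partial[id] = sum;
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];
        for (long t = 0; t < NUM_THREADS; t++)
            pthread_create(&threads[t], NULL, worker, (void *)t);
        double total = 0.0;
        for (long t = 0; t < NUM_THREADS; t++) {
            pthread_join(threads[t], NULL);
            total += partial[t];     /* combine the per-thread results */
        }
        printf("total = %f\n", total);
        return 0;
    }
    ```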

    Read the article

  • AVFoundation buffer comparison to a saved image

    - by user577552
    Hi, I am a long-time reader, first-time poster on StackOverflow, and must say it has been a great source of knowledge for me. I am trying to get to know the AVFoundation framework. What I want to do is save what the camera sees and then detect when something changes. Here is the part where I save the image to a UIImage:

    ```objc
    if (shouldSetBackgroundImage) {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(rowBase, bufferWidth, bufferHeight,
            8, bytesPerRow, colorSpace,
            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);

        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);

        // Create an image object from the Quartz image
        UIImage *image = [UIImage imageWithCGImage:quartzImage];
        [self setBackgroundImage:image];
        NSLog(@"reference image actually set");

        // Release the Quartz image
        CGImageRelease(quartzImage);

        // Signal that the image has been saved
        shouldSetBackgroundImage = NO;
    }
    ```

    and here is the part where I check whether there is any change in the image seen by the camera:

    ```objc
    else {
        CGImageRef cgImage = [backgroundImage CGImage];
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        char *data = (char *)CFDataGetBytePtr(bitmapData);  // returns const UInt8 *, hence the cast

        if (data != NULL) {
            int64_t numDiffer = 0, pixelCount = 0;
            NSMutableArray *pointsMutable = [NSMutableArray array];

            for (int row = 0; row < bufferHeight; row += 8) {
                for (int column = 0; column < bufferWidth; column += 8) {
                    // we get one pixel from each source (buffer and saved image)
                    unsigned char *pixel = rowBase + (row * bytesPerRow) + (column * BYTES_PER_PIXEL);
                    unsigned char *referencePixel = (unsigned char *)(data + (row * bytesPerRow) + (column * BYTES_PER_PIXEL));

                    pixelCount++;
                    if (!match(pixel, referencePixel, matchThreshold)) {
                        numDiffer++;
                        [pointsMutable addObject:
                            [NSValue valueWithCGPoint:
                                CGPointMake(SCREEN_WIDTH - (column / (float)bufferHeight) * SCREEN_WIDTH - 4.0,
                                            (row / (float)bufferWidth) * SCREEN_HEIGHT - 4.0)]];
                    }
                }
            }
            numberOfPixelsThatDiffer = numDiffer;
            points = [pointsMutable copy];
        }
    }
    ```

    For some reason this doesn't work, meaning that the iPhone detects almost everything as being different from the saved image, even though I set a very low threshold for detection in the match function... Do you have any idea of what I am doing wrong?

    Read the article

  • Automatically find compiler options for fastest exe on given machine?

    - by dehmann
    Is there a method to automatically find the best compiler options (on a given machine), which result in the fastest possible executable? Naturally, I use g++ -O3, but there are additional flags that may make the code run faster, e.g. -ffast-math and others, some of which are hardware-dependent. Does anyone know some code I can put in my configure.ac file (GNU autotools), so that the flags will be added to the Makefile automatically by the ./configure command? In addition to automatically determining the best flags, I would be interested in some useful compiler flags that are good to use as a default for most optimized executables.
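
    A common autotools pattern, sketched here on the assumption that the AX_CHECK_COMPILE_FLAG macro from the autoconf-archive is available: probe each candidate flag on the build machine and keep only the ones the compiler accepts. With GCC, -march=native already delegates most hardware-specific tuning to the compiler itself.

    ```
    # configure.ac fragment (sketch; requires AX_CHECK_COMPILE_FLAG
    # from the autoconf-archive). A flag is appended to CXXFLAGS only
    # if a test compile with it succeeds on this machine.
    AC_LANG_PUSH([C++])
    AX_CHECK_COMPILE_FLAG([-O3],            [CXXFLAGS="$CXXFLAGS -O3"])
    AX_CHECK_COMPILE_FLAG([-march=native],  [CXXFLAGS="$CXXFLAGS -march=native"])
    AX_CHECK_COMPILE_FLAG([-ffast-math],    [CXXFLAGS="$CXXFLAGS -ffast-math"])
    AX_CHECK_COMPILE_FLAG([-funroll-loops], [CXXFLAGS="$CXXFLAGS -funroll-loops"])
    AC_LANG_POP([C++])
    ```

    Truly searching for the fastest flag set requires running benchmarks rather than configure-time probes; tools such as acovea have explored the flag space with genetic algorithms for exactly that purpose.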

    Read the article

  • What one-time-password devices are compatible with mod_authn_otp?

    - by netvope
    mod_authn_otp is an Apache web server module for two-factor authentication using one-time passwords (OTP) generated via the HOTP/OATH algorithm defined in RFC 4226. The developers have listed only one compatible device (Authenex's A-Key 3600) on their website. If a device is fully compliant with the standard, and it allows you to recover the token ID, it should work. However, without testing, it's hard to tell whether a device is fully compliant. Have you ever tried other devices (software or hardware) with mod_authn_otp (or another open-source server-side OTP program)? If yes, please share your experience :)
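
    One practical compliance check is to compute HOTP yourself from a candidate token's secret and counter and compare it with what the device displays. A sketch of the RFC 4226 algorithm, assuming OpenSSL's HMAC for the SHA-1 step (build with -lcrypto); it reproduces the RFC's own test vectors:

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    static unsigned hotp(const uint8_t *key, size_t key_len,
                         uint64_t counter, int digits)
    {
        uint8_t msg[8], hash[20];
        unsigned int hash_len = 0;
        /* the counter is encoded big-endian per RFC 4226 */
        for (int i = 7; i >= 0; i--) { msg[i] = counter & 0xFF; counter >>= 8; }
        HMAC(EVP_sha1(), key, (int)key_len, msg, sizeof msg, hash, &hash_len);
        /* dynamic truncation: low 4 bits of the last byte pick the offset */
        int off = hash[19] & 0x0F;
        uint32_t bin = ((uint32_t)(hash[off] & 0x7F) << 24) | (hash[off + 1] << 16)
                     | (hash[off + 2] << 8) | hash[off + 3];
        uint32_t mod = 1;
        for (int i = 0; i < digits; i++) mod *= 10;
        return bin % mod;
    }

    int main(void)
    {
        /* RFC 4226 test vector: key "12345678901234567890", counter 0 -> 755224 */
        const char *key = "12345678901234567890";
        printf("%06u\n", hotp((const uint8_t *)key, strlen(key), 0, 6));
        return 0;
    }
    ```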

    Read the article

  • Fast algorithm implementation to sort a very small set

    - by aaa
    Hello. This is a problem I ran into a long time ago, and I thought I'd ask for your ideas. Assume I have a very small set of numbers (integers), 4 or 8 elements, that needs to be sorted, fast. What would be the best approach/algorithm? My approach was to use the max/min functions. I guess my question pertains more to implementation than to the type of algorithm. At this point it becomes somewhat hardware-dependent, so let us assume an Intel 64-bit processor with SSE3. Thanks
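
    For fixed-size inputs the standard answer is a sorting network, which is precisely a fixed sequence of min/max compare-exchanges: 5 comparators suffice for 4 elements and 19 for 8, and the same structure maps directly onto SSE min/max instructions for sorting several sets in parallel. A plain-C sketch of the 4-element network:

    ```c
    #include <stdio.h>

    /* compare-exchange: order the pair (a, b) using min/max selection */
    static void cswap(int *a, int *b)
    {
        int lo = (*a < *b) ? *a : *b;
        int hi = (*a < *b) ? *b : *a;
        *a = lo;
        *b = hi;
    }

    /* optimal 5-comparator sorting network for 4 elements */
    static void sort4(int v[4])
    {
        cswap(&v[0], &v[1]);   /* layer 1 */
        cswap(&v[2], &v[3]);
        cswap(&v[0], &v[2]);   /* layer 2 */
        cswap(&v[1], &v[3]);
        cswap(&v[1], &v[2]);   /* layer 3 */
    }

    int main(void)
    {
        int v[4] = {3, 1, 4, 2};
        sort4(v);
        printf("%d %d %d %d\n", v[0], v[1], v[2], v[3]);   /* 1 2 3 4 */
        return 0;
    }
    ```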

    Read the article

  • [Ubuntu] How can I log in to Ubuntu using a USB-serial console (RS232)?

    - by marc
    Welcome. How can I enable remote terminal login on Ubuntu 9.10 using a USB-serial terminal? I have the created device ttyUSB0 and I want to allow log-in using HyperTerminal. I found some resources, but they are all related to real hardware RS232 ports; I can't find any information about USB converters. Right now I have an established connection between that USB-serial port and my laptop (I can send text by writing to the port, cp sometext.txt /dev/ttyUSB0, and read it using HyperTerminal). Any ideas? Regards
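
    The recipe is the same as for a real serial port: respawn a getty on the device node. A sketch, assuming the Upstart init system shipped with Ubuntu 9.10 (the file name and runlevels are assumptions; on inittab-based systems the equivalent is a line like T0:23:respawn:/sbin/getty -L ttyUSB0 115200 vt100):

    ```
    # /etc/init/ttyUSB0.conf (sketch): keep a login prompt on the
    # USB-serial converter; -L treats it as a local line without
    # modem control signals.
    start on runlevel [2345]
    stop on runlevel [!2345]
    respawn
    exec /sbin/getty -L ttyUSB0 115200 vt100
    ```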

    Read the article

  • How to simulate different CPU frequency and limit RAM

    - by user351103
    Hi, I have to build a simulator in C#. This simulator should be able to run a second thread with configurable CPU speed and limited RAM size, e.g. 144 MHz and 50 MB. Of course I know that a simulator can never be as accurate as the real hardware, but I am trying to get almost similar performance. At the moment I'm thinking about creating a thread which I will stop/sleep from time to time; depending on the desired CPU speed, the simulator would adjust the sleep time of this thread and thereby simulate a different CPU frequency. To measure the achieved speed I thought about using PerformanceCounters. But with this approach I have the problem that I don't know how to limit the RAM size the thread can use. Do you have any ideas how to realize such a simulator? Thanks in advance!
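
    A sketch of the stop/sleep idea as a duty cycle, in C for illustration (a C# version would pair Stopwatch with Thread.Sleep in the same shape); the 10 ms slice and the speed ratio are assumptions to tune against your PerformanceCounter readings:

    ```c
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const double ratio = 144.0 / 1440.0;   /* hypothetical: 144 MHz target on a 1440 MHz host */
        const double slice_ns = 10e6;          /* 10 ms duty-cycle slice */
        struct timespec start, now, pause;
        volatile unsigned long work = 0;

        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &start);
            /* run phase: burn cycles until ratio * slice has elapsed */
            do {
                work++;   /* stand-in for one step of the simulated workload */
                clock_gettime(CLOCK_MONOTONIC, &now);
            } while ((now.tv_sec - start.tv_sec) * 1e9
                     + (now.tv_nsec - start.tv_nsec) < ratio * slice_ns);
            /* sleep phase: yield the CPU for the rest of the slice */
            pause.tv_sec = 0;
            pause.tv_nsec = (long)((1.0 - ratio) * slice_ns);
            nanosleep(&pause, NULL);
        }
    }
    ```

    Limiting RAM is harder, because threads share the process heap; routing the simulated workload's allocations through a counting allocator that refuses requests past 50 MB is one workable substitute.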

    Read the article

  • Is there a Novatel Wireless Modem Emulator or something similar?

    - by David Brown
    Novatel Wireless provides their NovaCore SDK for developers wishing to interface with their line of modems. I'm currently developing an open source managed wrapper for it, but I'm having difficulties with testing. I own a Novatel MiFi and have mobile broadband service through Sprint, but that can only get me so far. The device is already activated, thus I can't test the network activation features of the NovaCore SDK. There are also certain features only available for HSPA modems, which I am not able to get in my area. Is there an emulator capable of emulating a Novatel Wireless modem so that I can test my library without physical hardware and an actual data connection? If not, do you have any other suggestions that might help in this situation? I've contacted Novatel Wireless via email and their developer forum, but have not received a response. Thanks!

    Read the article

  • Exceptions & Interrupts

    - by Betamoo
    When I was searching for the distinction between exceptions and interrupts, I found this question, Interrupts and exceptions, on SO... Some answers there were not suitable (at least at the assembly level): "Exception are software-version of an interrupt". But software interrupts exist too!! "Interrupts are asynchronous but exceptions are synchronous". Is that right? "Interrupts occur regularly". "Interrupts are hardware implemented trap, exceptions are software implemented". Same objection as above! I need to find out whether some of these answers are right, and I would be grateful if anyone could provide a better answer... Thanks!

    Read the article

  • PPP connection with RAS dialer in C++

    - by user312054
    I have a Windows Mobile application running on Windows CE 5.0. I have been informed by the people supplying the hardware for the unit that I need to create a socket, which I have done successfully, and then dial out to the internet over a PPP connection using a RAS dialer connection. Our old code uses an APN to dial out, so I need to create the above connection with an APN. I am having trouble finding examples of this. Can someone point me to some examples of this situation?
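
    For the dial-out itself, here is a sketch against the RAS API that Windows CE 5.0 exposes; the entry name "MyGPRS" is a hypothetical phonebook entry pre-configured with the APN, and error handling is trimmed:

    ```c
    #include <windows.h>
    #include <string.h>
    #include <ras.h>

    /* Dial a pre-configured RAS entry synchronously. On many Windows CE
       devices the APN rides in the entry's phone number, often in the
       form L"~GPRS!apn.example.com" (a device-dependent convention). */
    int dial_ppp(void)
    {
        RASDIALPARAMS params;
        HRASCONN conn = NULL;
        DWORD err;

        memset(&params, 0, sizeof params);
        params.dwSize = sizeof params;
        wcscpy(params.szEntryName, L"MyGPRS");

        /* NULL notifier => RasDial blocks until connected or failed */
        err = RasDial(NULL, NULL, &params, 0, NULL, &conn);
        if (err != 0)
            return -1;   /* look the code up in raserror.h */
        return 0;        /* PPP is up; sockets now route over it */
    }
    ```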

    Read the article

  • How to sort my paws?

    - by Ivo Flipse
    In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws: I manually annotated them (RF = right front, RH = right hind, LF = left front, LH = left hind). As you can see, there's clearly a repeating pattern, and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated. My initial thought was to use heuristics to do the sorting, like:

    - There's a ~60-40% ratio in weight bearing between the front and hind paws;
    - The hind paws are generally smaller in surface;
    - The paws are (often) spatially divided into left and right.

    However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encountered a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, who probably have rules of their own. Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like. Based on the answers I received to my question about peak detection within the paw, I'm hoping there are more advanced solutions for sorting the paws, especially because the pressure distribution and its progression are different for each separate paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence. So I'm looking for a better way to sort the results with their corresponding paws. For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slices that describe their location (location on the plate and in time). To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), which represents the impacts that were extracted. Also note that 'false' impacts, such as where the paw is only partially measured (in space or time), can be ignored; they are only useful because they can help in recognizing a pattern, but they won't be analyzed. And for anyone interested, I'm keeping a blog with all the updates regarding the project!

    Read the article

  • How to write to the OpenGL Depth Buffer

    - by Mikepote
    I'm trying to implement an old-school technique where a rendered background image AND preset depth information are used to occlude other objects in the scene. For instance, if you have a picture of a room with some wires hanging from the ceiling in the foreground, these are given a shallow depth value in the depth map, and when rendered correctly this allows the character to walk "behind" the wires but in front of other objects in the room. So far I've tried creating a depth texture using: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Image.GetWidth(), Image.GetHeight(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels); and then just binding it to a quad and rendering that over the screen, but it doesn't write the depth values from the texture. I've also tried: glDrawPixels(Image.GetWidth(), Image.GetHeight(), GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels); but this slows my framerate down to about 0.25 fps... I know that you can do this in a pixel shader by setting gl_FragDepth to a value from the texture, but I wanted to know if I could achieve this on hardware without pixel-shader support?
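
    For reference, the shader path mentioned above is only a few lines. Drawing a screen-sized quad with this fragment program bound (a sketch; depthMap is the texture created above) while glColorMask is all GL_FALSE and glDepthMask is GL_TRUE writes only the depth buffer, so the colour image can be drawn in a separate pass. Without pixel shaders, glDrawPixels is essentially the only route; uploading the depth map as GL_UNSIGNED_SHORT or GL_UNSIGNED_INT rather than GL_UNSIGNED_BYTE may hit a faster driver path, but fixed-function performance here is driver-dependent.

    ```c
    /* Fragment shader source embedded as a C string: copies the value
       sampled from the depth texture into the fragment's depth. */
    static const char *depth_fragment_src =
        "uniform sampler2D depthMap;                                  \n"
        "void main()                                                  \n"
        "{                                                            \n"
        "    /* depth is stored in the red channel, in 0..1 */        \n"
        "    gl_FragDepth = texture2D(depthMap, gl_TexCoord[0].st).r; \n"
        "}                                                            \n";
    ```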

    Read the article

  • What are the Worst Software Project Failures Ever?

    - by Warren P
    Is there a good list of the "worst software project failures ever" in the history of software development? For example, in Canada a "gun registry" project spent around two billion dollars (http://en.wikipedia.org/wiki/Gun_registry). This is, of course, insane, even if the final product "sort of worked". I have heard of an FBI case-file system for which there have been several attempts at a rewrite, all of them failures so far. There is a book on the subject (Software Runaways). There doesn't seem to be a software "boondoggle" list or "fiasco" list on Wikipedia that I can see. (Update: Therac-25 would be the 'winner' of this question, except that I was internally thinking more of software projects that had mainly software as their deliverable, as opposed to firmware projects like Therac-25, where the hardware and firmware together are capable of killing people. In terms of pure software monetary debacles, which was my intended question, there are several contenders.)

    Read the article
