Search Results

Search found 22000 results on 880 pages for 'worker process'.

  • BumbleBee / getting Nvidia 310.19 Drivers to Work | 12.10

    - by Charles Mynard
    Asus U31JG-A1 laptop: Intel Core i3, Nvidia GeForce 415M / Intel HD Graphics (Optimus technology). Hello, I have been trying for about 3 weeks now to get any kind of Nvidia driver to work, and am now determined to get the 310.19 driver working. I have tried numerous times; either nothing happened, or my interface (menu bars, the top and left bars on the desktop, and the ability to close a window) would disappear. Apparently I'm not making the connection on how to get these to install properly. I have followed numerous other posts and websites and attempted Bumblebee, to no avail. I am wondering if anyone can write a step-by-step guide of the commands that I need to run in a terminal to get this to work. I've had to reinstall 12.10, so if you could walk me through the process of getting the drivers downloaded and installed, I would greatly appreciate it. I barely know what I'm doing, and this is quite a turn-off for someone new to Ubuntu; I really want to enjoy it, but this is preventing me from committing. Thank you in advance, and I apologize for being so flustered and helpless with this, but I have run out of patience.
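
    For what it's worth, the route most commonly suggested for Optimus laptops on 12.10 was the Bumblebee PPA rather than the raw nVidia installer. A hedged sketch of those steps (package names as commonly cited at the time; double-check them against the PPA page):

        # Add the Bumblebee PPA and install the Optimus stack
        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia

        # Reboot, then launch GPU-hungry programs through optirun;
        # everything else keeps running on the Intel GPU.
        optirun glxgears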

  • Help with Abstract Factory Pattern

    - by brazc0re
    I need help with an abstract factory pattern design. This question is a continuation of: Design help with parallel process. I am really confused about where I should be initializing all of the settings for each type of medium (e.g. RS232, TCP/IP, etc.). Attached is the drawing of how I am setting up the pattern. As shown, each medium implements an ICreateMedium interface. I would assume that the Create() method also creates the proper object, such as SerialPort serialPort = new SerialPort("COM1", baud); however, TCPIPMedium would have an issue with the interface because it wouldn't need to initialize a serial port object. I know I am doing something majorly wrong here; I just can't figure it out and have been stuck for a while. What I also get confused about is how the IMedium interface will get access to the communication object once it is created, so it can write out the appropriate byte[] packet. Any guidance would be greatly appreciated. My main goal is to have the Communicator class spit a packet out without caring which type of medium is active.
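
    A minimal C# sketch of one way to arrange this (names are hypothetical and follow the post's interfaces): each concrete creator owns its medium's settings, so serial-only constructor arguments never leak into the shared interface, and Communicator depends on nothing but IMedium.

        using System.IO.Ports;

        public interface IMedium
        {
            void Write(byte[] packet);      // all Communicator ever sees
        }

        public interface ICreateMedium
        {
            IMedium Create();               // no medium-specific parameters here
        }

        public class SerialMedium : IMedium
        {
            private readonly SerialPort _port;

            public SerialMedium(string portName, int baud)
            {
                _port = new SerialPort(portName, baud);
                _port.Open();
            }

            public void Write(byte[] packet)
            {
                _port.Write(packet, 0, packet.Length);
            }
        }

        public class SerialMediumCreator : ICreateMedium
        {
            // Serial settings live in the concrete creator, so a
            // TcpMediumCreator is free to take a host/port instead.
            public IMedium Create()
            {
                return new SerialMedium("COM1", 9600);
            }
        }

        public class Communicator
        {
            private readonly IMedium _medium;

            public Communicator(ICreateMedium creator)
            {
                _medium = creator.Create();
            }

            public void Send(byte[] packet)
            {
                _medium.Write(packet);      // no idea, and no care, which medium
            }
        }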

  • Seeking recommendations on resolving sporadic network connectivity latency for Notes client

    - by Russell Maher
    I have Domino servers in geographically dispersed data centers in the U.S. Sometimes when I open an NSF on one of those servers, the connection times out; then, when I open the NSF again, it connects immediately. This has been going on for years, and during that time I have upgraded and changed my own internet connection and moved servers to different data centers. Of course, I have direct connection documents using fixed IP addresses. When I do a Notes client trace, nothing is out of the ordinary. My business partner experiences the same thing from an entirely different city and a different ISP, but to the same servers. I never have any trouble connecting to the HTTP server, just over port 1352 (NRPC). Does anyone have any recommendations on a process to determine what is causing this problem?
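
    One hedged way to split the problem: time bare TCP connects to port 1352 outside of Notes, so a stall can be pinned on the network path rather than the client. A bash sketch (the host name is hypothetical; it uses bash's built-in /dev/tcp):

        #!/bin/bash
        # Log how long a raw TCP connect to the Domino NRPC port takes.
        HOST=domino1.example.com    # hypothetical
        while true; do
            start=$(date +%s.%N)
            if timeout 10 bash -c "exec 3<>/dev/tcp/$HOST/1352 && exec 3>&-" 2>/dev/null; then
                status=ok
            else
                status=FAIL
            fi
            end=$(date +%s.%N)
            printf '%s %s %.2fs\n' "$(date '+%F %T')" "$status" \
                "$(echo "$end - $start" | bc)" >> nrpc_probe.log
            sleep 30
        done

    If the probe log shows the same sporadic multi-second connects, the problem is below Notes; if it stays flat while the client still stalls, look at name resolution and the connection documents instead.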

  • RAID 5 creation (using mdadm): lots of reads/writes on creation. Is this normal?

    - by Gbrits
    I created a software RAID 5 array using: mdadm -C /dev/md2 -l5 -n4 /dev/sd[i-l]. At the same time, I'm using dstat to watch I/O activity: dstat -c -d -D total,sda1,md2,sdi,sdj,sdk,sdl -l -m -n, and I notice that disks sd[i-k] are all read from while sdl is written to. Now, I do understand that RAID 5 has to be initialized, but it takes a really long time, and all disks are clean and freshly formatted (using xfs), so I figure there might be some kind of shortcut to skip the (unnecessary?) checking. Is there? The creation is part of a time-critical nightly batch process (run on Amazon EC2), so it's not a one-time thing. Thanks, Geert-Jan
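
    What you are watching is the initial parity sync: md reads the data disks (sd[i-k]) and writes parity (sdl) across the whole device, regardless of what the filesystem will contain. If the volumes are genuinely all zeros (fresh EC2 volumes typically are), the commonly cited shortcut is --assume-clean, which skips the sync; a hedged sketch (on disks that are not really zeroed this leaves parity inconsistent, so use it with care):

        # Create the array without the initial resync:
        mdadm -C /dev/md2 -l5 -n4 --assume-clean /dev/sd[i-l]

        # Alternative: keep the sync but bound its I/O impact
        # (KB/s; the value is illustrative):
        echo 200000 > /proc/sys/dev/raid/speed_limit_max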

  • How do I find all text with a particular background color?

    - by Dave M G
    I have a LibreOffice Writer document that has undergone a process of editing, and sections of the text that needed to be rewritten were highlighted in yellow. As I fixed those sections, I removed the yellow highlight. Now I want to make sure there are no remaining areas of highlighted text that have not been fixed, or where the highlight was not removed. It's many hundreds of pages, so a manual scan is infeasible. Also, it may be that a single space or character was accidentally left highlighted, and I want to ensure I've accounted for them all. How can I search the document to find all instances where text has been highlighted?

  • Setup Email Server (sendmail + dovecot + squirrelmail)

    - by henry
    I am in the process of setting up my very first email server. I can get everything up and running (thanks to apt-get), and I managed to tie the users to system users. Now I am setting up virtual users for dovecot. However, I notice I can also set up users in sendmail itself. Why can you set up users in two different places? Will other mail servers deliver to the user defined in sendmail or the one in dovecot?
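
    Roughly, the two places exist because the two daemons do different jobs: sendmail owns SMTP (accepting mail and writing it into mailboxes, using system users, aliases, or virtual tables), while dovecot owns IMAP/POP3 (letting users log in and read those mailboxes). They are separate user databases that simply have to agree on who exists and where the mail lives. A sketch of the dovecot side of a virtual-user setup (dovecot 2.x syntax; the paths and the sample entry are assumptions):

        # /etc/dovecot/conf.d/auth-passwdfile.conf.ext
        passdb {
          driver = passwd-file
          args = scheme=SHA512-CRYPT username_format=%u /etc/dovecot/users
        }
        userdb {
          driver = passwd-file
          args = username_format=%u /etc/dovecot/users
        }

        # /etc/dovecot/users -- one line per virtual user (hypothetical entry)
        henry:{SHA512-CRYPT}$6$...:5000:5000::/var/vmail/henry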

  • Aggressive Auto-Updating?

    - by MattiasK
    What do you guys think is best practice regarding auto-updating? Google Chrome, for instance, seems to auto-update itself as soon as it gets a chance, without asking, and I'm fine with that. I think most "normal" users benefit from updates being a transparent process. Then again, some more technical users might be miffed if you update their app without permission. As I see it, there are a couple of options: 1) have a checkbox when installing that says "allow automatic updates"; 2) just have a preference somewhere that allows you to "disable automatic updates", so that you have to check for updates manually. I'm leaning towards 2), because 1) feels like it might alienate non-technical users, and I'd rather avoid installation queries if possible. Also, I'm thinking about making it easy to downgrade if an upgrade (heaven forbid) causes trouble; what are your thoughts? Another question: even if updates are applied automatically, perhaps they should be announced, if there are new features for example; otherwise you might not realize they exist and never use them. One thing that kinda scares me, though, is the security implications: someone could theoretically hack my server and push out spyware/zombieware to all my customers. It seems that using digital signatures to prevent man-in-the-middle attacks is the least you could do; otherwise you might be hooked up to a network that spoofs the address of the update server.
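
    On the signature point, a minimal sketch of the client-side check (Python with the pyca/cryptography package; file names are made up). The public key ships inside the application, so even a spoofed update server cannot produce an installable payload:

        # Verify a detached RSA signature over a downloaded update
        # before installing it.
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding
        from cryptography.exceptions import InvalidSignature

        def update_is_authentic(update_bytes, signature, pubkey_path="update_pub.pem"):
            with open(pubkey_path, "rb") as f:
                public_key = serialization.load_pem_public_key(f.read())
            try:
                public_key.verify(
                    signature,
                    update_bytes,
                    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                                salt_length=padding.PSS.MAX_LENGTH),
                    hashes.SHA256(),
                )
                return True
            except InvalidSignature:
                return False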

  • How to go about designing an intermediate routing filter program to accept input and forward accordingly?

    - by phileaton
    My predicament: I designed an app, written in Python, to read my mail and check for messages that contain a certain digital signature. It opens these and looks for keywords; if the message contains those keywords, certain related functions are executed on the computer. It is a way I can control my computer from my cell phone without being there. I am still in the beginning stages, and it can currently only remotely open and close applications/processes. The obvious issue is the security risk, which I hoped to address by requiring and checking for that digital signature. However, my issue comes when I'd like to make this program usable by multiple users. The idea is that the user will send keywords (username and password, for instance) to log into their personal email account and send messages to it to be parsed. Please ignore the security implications of sending unencrypted passwords through email. (Though if you could help me on that part I'd much appreciate it as well; currently, that is not the scope of my question.) My issue is designing an intermediary process that will take an email/password, read the email, and scan for those keywords. The problem is that the program has to be reading some email account in the first place to find the email containing the username/password! I have gotten myself into a loop and cannot figure out how to design this required intermediary program. I could just create an arbitrary email account and have that check for login credentials, but is there a better way of doing this? Also, is there a better way of communicating with a computer remotely than this, especially if the computer is not a server and is behind a router with only a private subnet IP? If I am asking this question in the wrong place, I deeply apologize. Any help would be much appreciated!
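
    The usual way out of the loop is exactly the dedicated-account idea: one service-owned control mailbox that all users send commands to, with authentication done by a per-user shared secret (an HMAC over the command) instead of mailing real passwords around. A rough Python sketch (account names, headers, and the secret store are all invented for illustration):

        import email
        import hashlib
        import hmac
        import imaplib

        SECRETS = {"alice": b"per-user-shared-secret"}   # hypothetical store

        def dispatch(command):
            print("would execute:", command)   # stand-in for the existing keyword handler

        def poll_control_mailbox():
            # One service-owned mailbox; no user passwords travel by mail.
            imap = imaplib.IMAP4_SSL("imap.example.com")
            imap.login("control@example.com", "service-password")
            imap.select("INBOX")
            _, data = imap.search(None, "UNSEEN")
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                msg = email.message_from_bytes(msg_data[0][1])
                user = msg["X-Remote-User"] or ""
                command = msg["Subject"] or ""
                claimed = msg["X-Remote-Signature"] or ""
                expected = hmac.new(SECRETS.get(user, b""),
                                    command.encode(), hashlib.sha256).hexdigest()
                if hmac.compare_digest(claimed, expected):
                    dispatch(command)
            imap.logout()

    On the second question: a machine behind NAT usually polls (as above) or keeps an outbound connection open to a relay; accepting inbound connections would need port forwarding on the router.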

  • How do I debug an upstart job?

    - by Cerales
    I have the following job in /etc/init/collector:

        start on runlevel [2345]
        stop on runlevel [!2345]
        expect daemon
        exec /usr/bin/twistd -y /path/to/my/tac/file

    When I start the job with sudo service collector start, it hangs. If I ctrl-c and run initctl list, I see this:

        collector start/killed, process 616

    I can't see an instance of the twistd daemon in ps, and the HTTP server it's supposed to be providing does not exist. I even tried this without 'expect daemon', and with a simple call to a one-line bash script using a script stanza, and it still doesn't work. I think I'm doing something very wrong. What could it be?
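
    Two hedged things to try. First, make upstart narrate what it is doing; second, take fork-counting out of the picture entirely by running twistd in the foreground, since a wrong expect stanza is the classic cause of jobs stuck in a start/killed state:

        # 1. Turn up upstart's own logging, then watch /var/log/syslog:
        sudo initctl log-priority debug
        sudo service collector start

        # 2. Or run twistd non-daemonized so no expect stanza is needed.
        #    /etc/init/collector (sketch):
        start on runlevel [2345]
        stop on runlevel [!2345]
        respawn
        exec /usr/bin/twistd -n -y /path/to/my/tac/file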

  • What's faster, cp -R or unpacking tar.gz files?

    - by Buttle Butkus
    I have some tar.gz files that total many gigabytes on a CentOS system. Most of the tar.gz files are actually pretty small, but the ones with images are large: one is 7.7G, another is about 4G, and a couple are around 1G. I have already unpacked the files once, and now I want a second copy of all of them. I assumed that copying the unpacked files would be faster than re-unpacking them, but I started running cp -R about 10 minutes ago and so far less than 500M has been copied. I feel certain that the unpacking process was faster. Am I right? And if so, why? It doesn't seem to make sense that unpacking would be faster than simply duplicating the existing structures.
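
    A tar.gz is read as one long sequential stream, and gunzip costs far less than disk seeks, so unpacking is mostly sequential I/O; cp -R re-walks thousands of small files with seeks on both the read and the write side, which is usually why it loses. To settle it for your own data, a quick harness (paths are hypothetical; run as root so the page cache can be dropped between runs):

        # Cold-cache timing of both strategies over the same data.
        sync && echo 3 > /proc/sys/vm/drop_caches
        time tar -xzf images.tar.gz -C /data/copy_a

        sync && echo 3 > /proc/sys/vm/drop_caches
        time cp -R /data/unpacked /data/copy_b

    A related trick on the same theme: tar piped to tar (tar -cf - -C src . | tar -xf - -C dst) often beats cp -R for big trees, for the same streaming reason.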

  • Is there an open source version check library and web app?

    - by user52485
    I'm a developer on a cross-platform (Win, MacOS, Linux) open source C++ application. I would like the program to occasionally check for the latest version from our web site. Between the security, privacy, and cross-platform network issues, I'd rather not roll our own solution. It seems like this is a common enough need that there ought to be a library/app which does this; unfortunately, the searches I've tried come up empty. Ideally, the web app would track requests and process the logs into some nice reports (number of users, which version, which platform, frequency of use, maybe even geographical info from IP addresses, etc.), while appropriately respecting privacy. What pre-existing tools can help solve this problem? Edits: I am looking for a reporting tool, not a dependency checker. Our project has the challenge of keeping up with our users: most do not join the mailing list, and our project has not been picked up by major distributions; most of our users are on Windows/MacOS anyway. When a new version comes out, we have no way of informing our users of its existence. Development is moving pretty fast, with major features added every few months. We would like to provide the user with a way to check for an updated version, and while we're at it, we would like to use these requests for some simple, anonymous usage tracking (X users running version Y with Z frequency, etc.). We do not need or want something that auto-updates or tracks dependencies on the system. We are not currently worried about update size; when the user chooses to update, we expect them to download the complete latest version. We would like to keep this as simple as possible.

  • Setting CPU cores off-limits to all threads not specified (preferably in Windows 7)

    - by Shinrai
    I have a really specific machine configuration in the works that would really be helped if there were any way to do this. Basically, what I'm looking for is the opposite of setting CPU affinity for a process: I want to be able to tell Windows, "No applications except [x] are allowed on [these cores]." Is there any mechanism whatsoever for doing this? (Yes, I am aware of some of the potential issues this could cause, and I normally would never fool with processor affinities, since the OS usually does a damned good job itself, but this is a pretty odd situation involving some very CPU-bound software constantly having to wait on interrupts, DPCs, and things from other threads.)
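
    Windows 7 has no supported "deny these cores to everyone else" switch; the closest approximation is the inverse, pushing every other process off the reserved cores. A hedged PowerShell sketch (run elevated; the masks are examples for a 4-core box, some protected system processes will refuse, and new processes start unconstrained, so it needs re-running or a startup wrapper):

        # Reserve cores 2-3 for myapp by confining everything else to cores 0-1.
        $reservedMask = 0xC    # binary 1100 -> cores 2 and 3
        $othersMask   = 0x3    # binary 0011 -> cores 0 and 1
        Get-Process | Where-Object { $_.Name -ne "myapp" } | ForEach-Object {
            try { $_.ProcessorAffinity = $othersMask } catch { }   # some deny access
        }
        Get-Process myapp | ForEach-Object { $_.ProcessorAffinity = $reservedMask }

        # For processes you launch yourself, cmd can set the mask at start:
        #   start /affinity 0xC myapp.exe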

  • Does Windows 8 support the "start in [folder]" property for shortcuts?

    - by FumbleFingers
    I use Foobar2000 to play music, and for years now I've run it in what they call "portable mode". What that means is that the program itself isn't actually "installed" in the traditional Windows sense. All "non-system" DLLs required by the application are in the same folder as the executable; earlier versions of Windows find them there, and everything runs fine, but Windows 8 fails because it doesn't find them. I want things set up this way because I keep Foobar2000 on a portable external hard drive, so I can just move it between different computers without having to go through the Windows install process. With all previous versions of Windows, I could either run the application directly from File Explorer, or create a shortcut on the desktop with the "start in folder" property set to the actual folder containing the program. I can still use the first method, but I want a shortcut! Is there any way to do what I want?

  • batch file to disable network share on Windows XP

    - by Robb
    Loosely related to the question "Network Share causing Cygwin to run slowly after 'ls'", I'd like to write a little batch file that I can execute to disconnect the host from any network shares, and subsequently another batch file to reconnect. Ideally, this would be something that I can execute from a PuTTY terminal, SSHed into the box running Cygwin. I'm pretty sure the batch files can be written easily, but I don't know about executing them from a PuTTY terminal. Regardless, I'd still like the batch files anyway. For the sake of simplicity, my process would be: log into the server via PuTTY; run the batch file to disconnect the shares; do what I need to do; run the batch file to reconnect the shares; exit the session, closing PuTTY.
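
    A hedged sketch of the pair (drive letters and share paths are invented); net use covers both directions, and since these are plain console commands they should run fine from an SSH session into the Cygwin box:

        rem --- disconnect_shares.bat: drop all mapped drives, no prompting ---
        net use * /delete /y

        rem --- reconnect_shares.bat ---
        net use X: \\fileserver\projects /persistent:no
        net use Y: \\fileserver\home /persistent:no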

  • A definition for a CPU second?

    - by dude
    Hey, I'm totally lost on this topic. Yesterday I was profiling a script I'm working on, and the unit for time spent was a 'CPU second'. Can anyone remind me of its definition? For example, for some profiling I got: 200.750 CPU seconds. What is that supposed to mean? In another case, for a time-consuming process, I got -347.977 CPU seconds, a negative number! Is there any way I can convert that time to calendar time? Cheers,
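
    A CPU second is one second of processor time actually consumed by the process (user plus system), as opposed to elapsed wall-clock time, so there is no general conversion to calendar time: a process saturating two cores accrues two CPU seconds per calendar second, while one blocked on I/O accrues almost none. A negative figure is almost certainly a profiler bug or counter overflow, not a real measurement. The distinction is easy to see with the shell's time builtin (the numbers below are illustrative):

        $ time gzip -9 < bigfile > /dev/null
        real    0m12.483s     # calendar (wall-clock) time
        user    0m11.902s     # CPU seconds spent in user code
        sys     0m0.351s      # CPU seconds spent in the kernel
        # "CPU seconds" ~ user + sys; real may be larger (waiting on I/O)
        # or smaller (several cores working in parallel).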

  • Is there a difference in page fault rates between CPU-bound and I/O-bound processes?

    - by user198864
    I was thinking: should there be any difference in the expected page fault rate between CPU-bound and I/O-bound processes? At first I thought there might be, since CPU-bound processes would likely make more memory accesses per time quantum, so I'd expect them to move from locality to locality faster. At the same time, a CPU-bound process is probably given a larger working set, but that doesn't reduce the fault overhead when it hits a new locality, if that locality wasn't pre-paged in. Is there actually any real difference in the page fault rates, or am I just musing about something nonexistent? And if there is, how would it impact a real-world OS like Linux?

  • Nginx and low-speed connections: request terminates after 253 seconds

    - by meze
    I'm trying to make nginx handle static files. All is working fine, except that when I throttle my connection speed to 8 kbit/s, the loading of a file just stops after 253-255 seconds (4.2 minutes according to Chrome). There is no error in the log, the status code is 200, but the response is received only partially. If I disable nginx and have Apache send the same file, it loads successfully after 10 minutes. The config I use for debugging is:

        client_header_buffer_size 16k;
        large_client_header_buffers 4 8k;
        client_max_body_size 50m;
        client_body_buffer_size 16k;
        client_header_timeout 20m;
        client_body_timeout 20m;
        send_timeout 20m;

    Did I miss some configuration?

  • Decouple software components via name convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver-management component. In my multi-tier architecture I have:

        Base class:
            DAL.Device          // my entity
        Interfaces:
            BL.IDriver          // handles the data processing between application and device
            BL.IDriverCreator   // creates an IDriver from a Device
            BL.IDriverFactory   // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) changing code within the DriverFactory and b) referencing the new IDriver implementation / assembly. From the customer's point of view, that means every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy):

        BL.RestDriver
        BL.RestDriverCreator
        DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer boundaries? If not, what would be a good approach?
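
    A sketch of the convention-based lookup in C# (the Create(Device) signature on IDriverCreator is an assumption; the type names are the ones from the post): scan the driver DLLs once, key each creator by its name prefix, and the factory then needs neither per-driver code nor compile-time references.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        public class ConventionDriverFactory
        {
            private readonly Dictionary<string, Type> _creatorsByPrefix
                = new Dictionary<string, Type>();

            public ConventionDriverFactory(string driverDirectory)
            {
                // e.g. RestDriverCreator is filed under the key "Rest"
                foreach (var dll in Directory.GetFiles(driverDirectory, "*.dll"))
                    foreach (var type in Assembly.LoadFrom(dll).GetTypes())
                        if (typeof(IDriverCreator).IsAssignableFrom(type)
                            && !type.IsAbstract
                            && type.Name.EndsWith("DriverCreator"))
                            _creatorsByPrefix[type.Name.Replace("DriverCreator", "")] = type;
            }

            public IDriver CreateFor(Device device)
            {
                // e.g. RestDevice also maps to the key "Rest"
                var key = device.GetType().Name.Replace("Device", "");
                Type creatorType;
                if (!_creatorsByPrefix.TryGetValue(key, out creatorType))
                    throw new InvalidOperationException(
                        "No driver creator found for " + device.GetType().Name);
                var creator = (IDriverCreator)Activator.CreateInstance(creatorType);
                return creator.Create(device);
            }
        }

    Whether this is a good idea across layer boundaries mostly comes down to how well the convention is enforced; a unit test that walks the same directories and fails on any Device without a matching creator keeps the string comparison honest.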

  • Installing nVidia drivers for Quadro FX 880M on 10.10 caused shutdown/startup issues.

    - by Chantz
    I was facing weird graphics driver issues due to the default nouveau drivers that came installed with Ubuntu 10.10, so I installed the latest nVidia graphics drivers, and the weird graphics issues stopped happening. So far so good, but when I tried to shut down the laptop, it got stuck at a screen with the text:

        modem-manager: Caught signal 15, shutting down...
        init: Disconnected from system bus
        init: dbus main process (1107) killed by TERM signal

    And this happens every time, without fail. I tried updating the kernel and any/all drivers through Update Manager, but it still happens. Not only that, even the startup screen is totally screwed up: it just displays "Ubuntu 10.10" in text with 3 dots. But that is acceptable. Having to power-cycle the laptop each and every time to shut it down is not cool; the same goes for when I try to restart. Interestingly, if I try to shut down the laptop from the login screen, it does so without any problems. I googled, and many people seem to face the same issue, but I couldn't find any silver bullet; hoping to find one here.

  • ASP.NET MVC Cookbook - public review

    - by asiemer
    I have recently started writing another book. The topic of this book is ASP.NET MVC. This book differs from my previous one in that, rather than working towards building one project from end to end, it demonstrates specific topics from end to end. It is a recipe book (hence the cookbook name) and will be part of the Packt Publishing cookbook series. Example recipes might be consuming JSON, creating a master/details page, jQuery modal popups, custom ActionResults, etc. Basically, anything recipe-oriented around the topic of ASP.NET MVC might be acceptable. If you are interested in helping out with the review process, you can join the "ASP.NET MVC 2 Cookbook-review" group on Google here: http://groups.google.com/group/aspnet-mvc-2-cookbook-review. Currently the suggested TOC for the project is listed, and chapters 1, 2, and most of 8 are posted. Chapter 5 should be available tonight or tomorrow. In addition to reporting any errors that you might find (much appreciated), I am very interested in hearing about recipes that you want included, expanded, or removed (as being redundant or overly simple). Any input is appreciated! Hearing user feedback after the book is complete is a little late, in my opinion (unless it is positive feedback, of course). Thank you!

  • Solaris 10: Identify a PID and the CPU it's running on

    - by Marcus
    I have multiple instances of a database running on a Solaris system. I'd like to prove that each database process is being handled by a different CPU. Essentially, I want to be able to do something like ps -ef | grep <process_name> to get the PIDs, and then run another command (if required) to identify the CPU. Is prstat able to do this? I'm making the assumption that as each database instance is started, each one uses a different CPU. I'm not sure if I'm understanding this correctly. The reason I want to do this is that Sun hardware has slow CPUs, but lots of them; therefore, to get the best performance out of it, I need to try to spread the load among the CPUs. Thanks
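
    A hedged pointer: by default Solaris does not pin a process to one CPU at all (the scheduler migrates threads freely), so one-instance-per-CPU is something you impose rather than observe, and prstat mostly reports usage rather than placement. pbind and psrset are the standard tools for imposing it (the PID below is hypothetical):

        # Query any existing binding for a PID:
        pbind -q 12345

        # Bind that instance to processor 3:
        pbind -b 3 12345

        # Or carve out a processor set and bind an instance to it:
        psrset -c 2 3        # create a set from CPUs 2 and 3
        psrset -b 1 12345    # bind PID 12345 to set 1

        # List CPUs and watch per-CPU load to confirm the spread:
        psrinfo
        mpstat 5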

  • Create and copy a Windows Mobile 5.0 operating system image

    - by user20119
    We have several dozen Windows Mobile 5.0 devices (Symbol MC7095 handhelds equipped with embedded Verizon WLAN, if that matters) that all need the same software and configuration. We connect all of these devices via a USB cradle to add software to them via Microsoft ActiveSync, and then do several configuration changes directly on the handhelds themselves, in the OS. That process takes 30 minutes or more, per device. Is there any way to set up one device and take a 'disk image' of the entire OS/software, such that things could then be copied (quickly/easily) to the other devices? Is such a thing possible, with Windows Mobile devices?

  • Is Clonezilla a good option for a daily batch-file-based backup of a Windows XP PC?

    - by rossmcm
    Having just been through the process of rebuilding a Windows XP desktop machine when the disk died, I'm anxious to make it a lot less painful. I didn't lose any data, but reinstalling everything took ages. Clonezilla seems to be a highly mentioned free backup tool. How easy would it be to implement the following?

        1. A nightly unattended backup of the desktop's disk image to another
           network machine (or a second drive in the machine), hopefully with
           compression.
        2. A restore from that image using USB boot media.

    So that, if I come in to work and find the hard drive has tanked, it is just a matter of replacing the dead drive with a new one, booting from the USB stick, choosing the image to restore, and then finding something else to do for an hour or two. When it is finished, I would hopefully be back to where I was.
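
    Clonezilla can run unattended: the live environment is driven by the ocs-sr command, so the usual recipe is a Clonezilla Live USB or PXE boot with ocs-sr options baked into the boot parameters, saving to an SSH/SMB share. The caveat for "nightly" is that the machine must reboot into the live environment for each image, so many people pair occasional Clonezilla full images with a file-level nightly backup instead. A rough sketch of the two commands (option letters vary between releases, so treat these flags as indicative and check ocs-sr's help on your version):

        # Save /dev/sda as image "xp_nightly" to the mounted image
        # repository (/home/partimag), then power off:
        /usr/sbin/ocs-sr -q2 -j2 -z1 -i 2000 -p poweroff savedisk xp_nightly sda

        # Restore from the USB-booted live environment:
        /usr/sbin/ocs-sr -g auto -e1 auto -e2 -j2 -p reboot restoredisk xp_nightly sda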
