Search Results

Search found 18961 results on 759 pages for 'far se'.

Page 114/759 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • Why is process not being displayed by TOP

    - by drN
    I am running a Mathematica script (this question probably doesn't fit on Mathematica.SE, however) and I know that it generally takes up a lot of RAM and loads up my cores. However, although pgrep MathKernel shows a PID, top doesn't list the process among the top entries, even though I can see it is using about 2.25 GB of the 8 GB available to me.

    pmap -x my_process_id reports: total kB 2243132 1907404 1892108

    ps aux | grep MathKernel gives:
    dnaneet 20837 12.6 23.3 2234944 1907404 pts/1 Sl 09:23 8:01 /share/apps/mathematica/8.0.4/SystemFiles/Kernel/Binaries/Linux-x86-64/MathKernel -runfirst $TopDirectory="/share/apps/mathematica/8.0.4" -script ./dcm_10micrometer_2x -- ./dcm_10micrometer_2x

    ps aux also shows that the process is taking about 12% CPU (marked with asterisks):
    dnaneet 20601 0.0 0.0 68264 1660 pts/1 Ss 09:15 0:00 -bash
    **dnaneet 20837 12.2 23.3 2234944 1907404 pts/1 Sl 09:23 8:01 /share/apps/mat**
    dnaneet 21922 0.0 0.0 65604 948 pts/1 R+ 10:29 0:00 ps -aux

    Did this process fail, and is the MathKernel just lingering?
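
    One quick way to rule out top simply sorting the process off-screen is to point top at the PID directly. A minimal sketch, assuming the kernel process is still called MathKernel:

      # Watch only the MathKernel process, regardless of how top sorts the full list
      top -p "$(pgrep -d, MathKernel)"

      # Or run plain top and press Shift+M to sort by resident memory, which keeps
      # a large but mostly idle process near the top of the display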

    Read the article

  • Offline productivity

    - by Frank Meulenaar
    On some days I'm commuting 2 hours (one way) on the train. I don't have any mobile internet, nor is there always WiFi service on the train. For security reasons I can't do any work on the train, so I'm trying to use it as geek time. I'm looking for general solutions on how to do this (I'm on Firefox/Windows, but I don't think it matters). Email works perfectly with Gmail offline: it syncs as soon as I'm online and remembers complicated stuff. So far I have used the ScrapBook plugin to store a website. It works well, but I have to download my favourite news page again every day - I want it to sync as soon as possible. It would be even better if I could click a page on my desktop and my laptop would sync it as soon as it had the chance. (Edit: maybe the AutoSave plugin for ScrapBook can do this.) Similarly, I use the DownloadHelper plugin to download YouTube videos, but I'd like something that automatically downloads videos from a given channel. Any tips are welcome. So far my early morning schedule is: wake up, power on laptop, make coffee, power off laptop and leave within 10 minutes (enough time for Gmail to sync), but I can imagine a system where my laptop stays on during the night (or boots before I wake (and makes me coffee :])).
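
    For the "download my favourite news page every day" part, a scheduled mirroring command is one low-tech option. A rough sketch, assuming wget and youtube-dl are installed (both run on Windows) and are scheduled via Task Scheduler; the URLs and paths are placeholders:

      rem Mirror a news page for offline reading before leaving the house
      wget --mirror --convert-links --page-requisites --no-parent ^
           --directory-prefix=%USERPROFILE%\offline-news http://example.com/news

      rem Fetch anything new from a YouTube channel; the archive file stops re-downloads
      youtube-dl --ignore-errors --download-archive downloaded.txt ^
           "http://www.youtube.com/user/examplechannel"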

    Read the article

  • 7-Zip many files from different folders?

    - by mafutrct
    I would like to add a large number of files with different names from different folders to a single 7-Zip archive using 7za.exe. This should be simple, but it has turned out to be a major pain. I created a file that contains the paths (7za a out.7z @list.txt), but once there are too many (~100) files, it fails. Apparently the content of the argument file is pushed onto the command-line buffer [Edit: this was likely misinformation on my part; either way it was not the reason], which is far too small (the number of files to add is more than one million). Splitting the process up by adding the files one by one is not feasible because of the way 7za works: when adding the next file, it creates a copy of the archive, adds the file to the copy and finally replaces the original. This is terribly slow once the archive grows to a couple of hundred MB. So far I am using a combination of the two approaches, adding a dozen files at a time in a loop, but it is an unreliable hack and still very slow. Is there a better way to do it? I tried to use 7-Zip wrapper DLLs (I'm a C# programmer), but none of them worked reliably and I was repeatedly advised to just use 7za instead.
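
    For reference, a rough PowerShell sketch of the batching approach the question already describes - splitting the list into chunks and adding each chunk with one 7za call. The chunk size, file names and 7za path are placeholders to tune, not a recommendation:

      # Add files to out.7z in chunks so no single 7za invocation gets a huge list
      $chunkSize = 50
      $files = Get-Content list.txt
      for ($i = 0; $i -lt $files.Count; $i += $chunkSize) {
          $last = [Math]::Min($i + $chunkSize, $files.Count) - 1
          $files[$i..$last] | Set-Content chunk.txt
          & .\7za.exe a out.7z "@chunk.txt"
      }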

    Read the article

  • virtual web folder served by PHP script

    - by Martin
    I am trying to configure my Apache to be able to display (virtual) pages like: mywebpage.com/something1, mywebpage.com/something2, mywebpage.com/folder/something3. I would like these "somethingX" and "folder" folders to be only virtual, not physical directories. For a start it would be great to send all requests for mywebpage to one PHP script which would somehow receive the original path information (there is some SERVER array, as far as I know) and call the necessary PHP functions (so far I use addresses like mywebpage.com/index.php?page=blabla&otherparameters=values...). Is that possible? I am struggling with different combinations; currently I have the following file in /etc/apache2/conf.d/something.conf (not working, of course). What is the correct way to proceed? Thanks.

    <Location /myweb>
    SetHandler my-handler
    Action my-handler /srv/www/htdocs/myweb/product.php virtual
    </Location>

    My pages are in /srv/www/htdocs/myweb. I tried with Location, with Directory, with Action and SetHandler, with AddHandler... ;-) Some configurations were ignored, some caused "object not found" with nothing relevant in the error log.
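
    A common pattern for this kind of virtual path is a mod_rewrite front controller: route every request that is not a real file to one script, and let that script read the original path from the SERVER array. A minimal sketch, assuming mod_rewrite is enabled and product.php is meant to handle every virtual page (the rewrite target is an assumption):

      <Directory /srv/www/htdocs/myweb>
          RewriteEngine On
          # Pass anything that is not a real file or directory to product.php;
          # PHP can then inspect $_SERVER['REQUEST_URI'] to see the original path
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteRule ^ product.php [L]
      </Directory>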

    Read the article

  • Long wait until POST...

    - by Wesley
    Here are the specs to put things into context: ECS P4VXASD2+ (V5.0) motherboard, Intel Pentium 4 Northwood 2.8 GHz (512 KB L2, 533 MHz FSB), 2x 512 MB PC2100 DDR266 RAM, 128 MB NVIDIA GeForce FX 5200 AGP, WD Caviar SE 80 GB IDE HDD, Gigabyte CD-RW drive, OKIA 300W ATX PSU. Every time I try to boot this computer, it takes at least 10-15 seconds before it will POST. All my other machines POST within 1-2 seconds, but this one takes a particularly long time. I've read suggestions from a Google search to swap the CMOS battery, check the BIOS settings, and double-check the CMOS jumper. Still, after following those, it takes a while to POST. What else could be causing a long delay before POSTing?

    Read the article

  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that is providing file-sharing services to four Windows 7 32-bit clients. The server is an AMD Phenom X3 with two Western Digital 10EADS (1TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS. The client machines are taking an extremely long time to access/transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, or one program attempting to open a file but having to wait will prevent other software from accessing network resources at all. Other examples include one image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document. I had initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was additionally confirmed when closing down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon and after logging on (with no client machines attached), the system displayed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt to appear after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally. I have also considered that the problem is hard-drive-related, however diagnostic programs reveal no drive problems. Western Digital DLG, Spinrite and SMARTUDM show no abnormal characteristics - the drives are in perfect health as far as the hardware is concerned. I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated.

    Requested information: output of 'free' is at hxxp://pastebin.com/mfsJS8HS (stupid spam filter). The command 'hdparm -d /dev/sda1' reports: HDIO_GET_DMA failed: Inappropriate ioctl for device (the BIOS is set to AHCI - I probably should have mentioned that).
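
    To narrow down whether the spikes come from one runaway process or from the disks themselves, it may help to capture per-device and per-process I/O while the load average is high. A minimal sketch for Ubuntu 10.04 (package names assumed from the standard repositories):

      # sysstat provides iostat; iotop shows which processes generate the I/O
      sudo apt-get install sysstat iotop

      # Extended per-device statistics every 5 seconds; %util pinned near 100
      # with low throughput points at the drive rather than the network
      iostat -x 5

      # Only show processes that are actually doing I/O right now
      sudo iotop -o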

    Read the article

  • Export-Mailbox - fails with large folders

    - by grojo
    I am trying to move messages from a rather large mailbox to an archive mailbox, but I run into errors all the time. The command I am executing is:

    Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER -StartDate 2009-02-01 -EndDate 2009-02-28 -DeleteContent -Confirm:$false

    I can copy/move some messages, but run into frequent "an unknown error has occurred" (status code -1056749164). I run the console as an administrative user, and all permissions are set correctly, as far as I can tell. I've restricted the start and end dates in case the number of messages moved/deleted should create problems. Is there anything I am missing in my setup? Corrupted messages? Over-limit message sizes?

    Update: What I've learnt so far is that folders with more than approx. 3000 messages will generate errors. If mail retention is set (default 30 days), Export-Mailbox will scan all messages whether or not they were deleted in previous runs, so restricting the dates to limit the number of messages will not work. To avoid errors, I've switched off deleted-message retention for the mailbox, and moved the messages from one large folder into multiple folders, and moved these one by one...
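
    For what it's worth, a rough sketch of the "move in small slices" approach described above, using only the parameters already shown in the command; the one-week window is an arbitrary choice, not a tested limit:

      # Move one week at a time so each Export-Mailbox run touches fewer messages
      $start = Get-Date '2009-02-01'
      $end   = Get-Date '2009-02-28'
      while ($start -lt $end) {
          $sliceEnd = $start.AddDays(7)
          if ($sliceEnd -gt $end) { $sliceEnd = $end }
          Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER `
              -StartDate $start -EndDate $sliceEnd -DeleteContent -Confirm:$false
          $start = $sliceEnd
      }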

    Read the article

  • Windows Java-based apps not working

    - by DariusVE
    After updating the Java JRE to 7u25, many Java-based applications stopped working normally. I then upgraded Java to 7u45, and the apps still don't work. The Minecraft start screen does not appear; I have to press the TAB key to select the Run button and press ENTER to run the game. NetBeans IDE is running, but nothing shows on the screen. Eclipse and JDownloader are working fine. I cannot open the Java Control Panel; it only shows the Java icon in the taskbar.

    My system: Windows 7 Ultimate SP1 64-bit. Java: java version "1.7.0_45", Java(TM) SE Runtime Environment (build 1.7.0_45-b18), Java HotSpot(TM) Client VM (build 24.45-b08, mixed mode, sharing)

    Read the article

  • Command output as string

    - by rik
    I want to get the output of the command "C:\Program Files (x86)\Java\jre7\bin\java.exe" -version as a string variable. I tried this:

    $out = &"C:\Program Files (x86)\Java\jre7\bin\java.exe" -version

    but it gives this error message:

    java.exe : java version "1.7.0_05"
    At line:1 char:9
    + $out = & <<<< "C:\Program Files (x86)\Java\jre7\bin\java.exe" -version
    + CategoryInfo          : NotSpecified: (java version "1.7.0_05":String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
    Java(TM) SE Runtime Environment (build 1.7.0_05-b05)
    Java HotSpot(TM) Client VM (build 23.1-b03, mixed mode, sharing)

    The $out variable seems to be empty. What am I doing wrong?
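
    The underlying quirk is that java -version writes to stderr, which PowerShell surfaces as a NativeCommandError when it is captured. A minimal sketch of one way around it (same path as in the question): redirect stderr into the success stream and flatten the result:

      # 2>&1 merges stderr into the pipeline; Out-String turns the lines into one string
      $out = & "C:\Program Files (x86)\Java\jre7\bin\java.exe" -version 2>&1 | Out-String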

    Read the article

  • Installing FreeNAS 8.3 problems

    - by osij2is
    I'm trying to install FreeNAS 8.3 on some desktop-level hardware (AMD Phenom + 890FX + 16GB) and I've been unsuccessful. I initially tried using a USB stick and followed the instructions on the FreeNAS site. Making the USB stick was simple, as the instructions laid out, but as soon as the USB stick is detected during the boot process, some text appears and quickly vanishes, and my machine reboots endlessly. After trying several different ways to make the USB stick, I tried using a DVD-ROM, but again I had the same issue as with the USB stick. This leads me to conclude that a BIOS setting is incorrect, but I have no idea which one. I've changed the BIOS to not "fast" boot, per se, and I've correctly configured the boot order for the USB stick and the DVD-ROM drive, so I know that part is working. Have I missed anything that might be causing this problem? I'm not a FreeBSD/FreeNAS expert by any means.

    Read the article

  • Server downtime - are these APC warnings the cause?

    - by DisgruntledGoat
    Yesterday I had a problem with my dedicated server (Ubuntu 10.04, LAMP). It wasn't down per se, but it was running incredibly slowly, as if we had a massive overload of visitors (though I don't think we did). It's running smoothly again now. I've been checking through log files etc. to see if I can find any issues; the only strange thing is a bunch of these errors, occurring at about the same time as the downtime:

    [apc-warning] Unable to allocate memory for pool. in [file] on line 49.

    And a bit later on:

    [apc-warning] GC cache entry '[file1]' (dev=2056 ino=8988092) was on gc-list for 3601 seconds in [file2] on line 746.

    Could these errors indicate the cause of the server slowdown, or are they simply a result of the server being slow in the first place? What would be the solution?
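
    For context: "Unable to allocate memory for pool" is the message APC emits when its shared-memory segment fills up, so the usual knob to look at is apc.shm_size. A hedged sketch of the relevant php.ini settings (the path and values are illustrative, not a recommendation for this particular server):

      ; /etc/php5/conf.d/apc.ini (location may differ)
      ; On older APC builds the value is plain megabytes (e.g. 128); newer builds
      ; also accept a suffix such as 128M
      apc.shm_size = 128
      apc.ttl      = 7200
      apc.user_ttl = 7200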

    Read the article

  • Distributed website server redundancy

    - by Keith Lion
    Assume a website's infrastructure is very complicated and fully distributed (probably like most large web companies). Am I right in thinking that although there are all these extra web servers to handle multiple client requests, there is still a single "machine" through which users must enter? I am guessing this machine will be the one physically associated with the IP address. I ask because I need to know whether, in places where distributed systems exist, there is still a single point of failure - usually the control node or, in this example, the machine connected to the public internet. Surely there cannot be two machines connected to the internet, as they would have to have different IP addresses? This "machine" may not be a server per se; maybe it is a piece of Cisco equipment. I just need to know whether, in the real world, these distributed systems still have a particular section where they depend on the integrity of one electronic device.

    Read the article

  • Cisco configuration for public library internet

    - by AlternateZ
    I'm a C/C++ computer programmer turned IT support guy working for a public library. My day is usually spent helping random grandparents learn how to use email, so my networking knowledge is limited to what I can glean from Google. Here's the situation. We have a public library with 20 PCs on a LAN and also public WiFi access. Previously we were running all of this on one ADSL connection and people complained about low speeds. We hired a networking company to set up a Cisco dual-WAN router for us, and purchased an additional ADSL connection. The intention was to give the LAN PCs a guaranteed amount of bandwidth each, and then let the WiFi users split the rest. The results were far worse than we expected, and all we got from the company was excuses, and they've since washed their hands of us. During busy periods, net performance on the LAN PCs is so poor that attaching files to Gmail etc. often times out and fails - far from the "guaranteed amount of bandwidth each" that we hoped for! Sometimes it feels like performance is worse than before, when we had one ADSL link and an unconfigured router. Anyway, surely this is a problem encountered a million times over across the world? (Sharing internet across many users effectively.) What are the standard solutions for something like this? I admit to not even knowing the right jargon to google for (load balancing?). I'd appreciate any links to resources/guides that might help me get a better understanding of the problem and its solutions, and perhaps some stories of your own experience in solving similar problems. This will help us evaluate and negotiate with network consultants in the future. If it's relevant, our router config contains a section "policy-map" with "bandwidth percent" for each class of user (LAN, WiFi), and "fair-queue".
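
    Purely for orientation (not the library's actual configuration), the "policy-map" / "bandwidth percent" lines mentioned above normally belong to a Cisco IOS CBWFQ setup along these lines; the class names, percentages and interface are assumptions:

      class-map match-any LAN-PCS
       match access-group name LAN-SUBNET
      class-map match-any WIFI
       match access-group name WIFI-SUBNET
      !
      policy-map SHARE-WAN
       class LAN-PCS
        bandwidth percent 50
       class WIFI
        bandwidth percent 20
       class class-default
        fair-queue
      !
      interface Dialer0
       service-policy output SHARE-WAN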

    Read the article

  • Setup ejabberd with SQL Server 2008

    - by wonster
    Here's what I have got so far. Windows 2008 Server, 64-bit. I installed the latest version of ejabberd, ejabberd-2.1.8-windows-installer.exe. The Windows service starts up fine but seems ineffective; however, using the start and stop scripts works. I am able to log in to the admin page, which so far doesn't seem that versatile. I opened up ports 5222, 5226 and 5280 for my workstation to talk to the server. I've got the Spark and Jabbear Windows clients to register, log in and instant-message with multiple accounts using the server. After confirming that I've got the very basics working, I decided to make use of SQL Server 2008 as the database. Reason? Mainly, I am very comfortable with SQL Server: I can deal with redundancy, failover and data analysis easily, and I'm not sure ejabberd's built-in DB provides all that. Following the instructions from ejabberd's documentation, I set up a system DSN that points to another physical database. The DSN checks out fine (I tried both Named Pipes and TCP/IP). I modified ejabberd.cfg: commented out the line {auth_method, internal} and uncommented the line {auth_method, odbc}, then uncommented and modified {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}. After making these changes, I restarted. No errors are found in the log files, but the Jabber clients are no longer able to register new accounts. I'm not sure where to look for errors besides the /logs/ folder, as I'm new to all this. I am basically stuck here, on this last step. Has anyone got this setup to work recently? Some of the posts I've found around are years old and of no help. I can't be the only one setting up ejabberd with MS SQL. Any help would be appreciated!
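
    One way to separate ODBC/SQL Server problems from ejabberd problems is to open the exact same connection string from PowerShell on the ejabberd host. A rough sketch, using the DSN and credentials already shown in the config line above:

      # If this fails, the issue is the DSN / SQL Server login, not ejabberd itself
      $conn = New-Object System.Data.Odbc.OdbcConnection
      $conn.ConnectionString = "DSN=ejabberd;UID=somelogin;PWD=somepassword"
      $conn.Open()
      $conn.State      # should report 'Open'
      $conn.Close()

    Also worth noting: ejabberd expects its own tables to exist in that database (schema scripts are shipped with the ejabberd source), so registration can fail if the schema was never loaded.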

    Read the article

  • NAT : understanding about interconnection

    - by PITCHY
    I have two routers, A and B, interconnected over a serial link, with the IPs 10.0.0.1/30 on A and 10.0.0.2/30 on B. On router A I have enabled NAT with the pool 200.0.0.1 - 200.0.0.15/28. When traffic leaves through router A it is translated to an IP from the pool, for example 200.0.0.10. How does this work, given that my new IP (200.0.0.10) is not on the same network as the destination interface (10.0.0.2)?
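
    For illustration only (names and the inside ACL are assumptions, not the actual configuration), the pieces that typically make such a pool usable look like this on IOS - the key point being that router B needs a route back towards the pool, because 200.0.0.0/28 is not a directly connected network for it:

      ! Router A - NAT pool and the traffic allowed to use it
      ip nat pool PUBLIC-POOL 200.0.0.1 200.0.0.15 netmask 255.255.255.240
      access-list 1 permit 192.168.0.0 0.0.0.255
      ip nat inside source list 1 pool PUBLIC-POOL
      !
      ! Router B - return route for the translated addresses
      ip route 200.0.0.0 255.255.255.240 10.0.0.1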

    Read the article

  • Sharepoint: authenticating users via forms authentication

    - by sbee
    My problem is the following (SharePoint newbie): I want to change the default zone from a Windows-authenticated zone to a forms-authenticated zone, thereby forcing the site collection administrator to log in via forms authentication and not Windows. The SharePoint users will be accessing the site internally. My goal is to effectively replace Windows authentication with forms authentication, as my company does not have Active Directory installed. So far I have created an ASP.NET application that adds the users to the database; the database was created via the .NET Framework ASP.NET SQL registration tool (aspnet_regsql). However, when I change the default zone to the AspNetSqlMembershipProvider (Forms) and attempt to add my site collection administrator via Central Administration, I get the error "No exact match found", as shown on the screenshot. My inkling is that somehow the people picker is failing to read the users from the database, but research on correcting that has so far proved fruitless. I have made all the relevant changes to the config files of the three sites involved (the Central Administration site, my test site, and the Add Users site). The changes are the following: membership provider, connection string, people picker. I left out the role provider for now, as it is optional. Help on this would be highly appreciated...
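
    A hedged sketch of the kind of web.config entries usually involved (names and connection string are placeholders). The detail that most often produces "No exact match found" is that the same membership provider must also be registered in the Central Administration web.config, together with a people-picker wildcard entry, or the picker cannot resolve forms users at all:

      <connectionStrings>
        <add name="FbaSql" connectionString="Server=.;Database=aspnetdb;Integrated Security=True" />
      </connectionStrings>
      <system.web>
        <membership defaultProvider="AspNetSqlMembershipProvider">
          <providers>
            <add name="AspNetSqlMembershipProvider"
                 type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 connectionStringName="FbaSql" applicationName="/" />
          </providers>
        </membership>
      </system.web>
      <!-- In the SharePoint section of web.config -->
      <PeoplePickerWildcards>
        <clear />
        <add key="AspNetSqlMembershipProvider" value="%" />
      </PeoplePickerWildcards>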

    Read the article

  • Coloring of collapsed threads in mutt

    - by Rich
    I'm trying to figure out the syntax for colouring collapsed threads in the mutt index. The documentation for mutt patterns doesn't seem to include a description of how this works, and so far I've been completely unable to figure it out by trial and error. What I'd like is for collapsed threads that contain any unread (new) messages to always be coloured green. If collapsed threads with no unread messages contain any flagged messages, then I'd like them to be red. So far, every set of patterns I've tried results in threads that contain both flagged and unread messages being coloured red (I want them green).

    These work:
    color index green default "~N"            # unread messages
    color index green default "~N~F"          # unread flagged messages
    color index red default "~F"              # flagged messages
    color index green default "~v~(~N)"       # collapsed thread with unread

    But these don't:
    color index green default "~v~(~N~F)"     # attempt to keep threads with unread green
    color index red default "~v~(~F)"         # colours collapsed threads with flagged and unread red
    color index red default "~v~(!~N~F)"      # ditto
    color index red default "~v~(^!~N~F)"     # ditto
    color index red default "~v~(~F)~(!~N)"   # ditto
    color index red default "~v~(~F)~v~(!~N)" # ditto

    I've also tried switching the order of the "~v~(~F)" and "~v~(~N)" commands in the file, but the "flagged" rule always seems to take precedence over the "new" rule. Ideally I'd like to understand how the syntax for colouring collapsed threads works, but at this point I'd happily settle for a set of rules that achieves the colour scheme described above.

    Read the article

  • Library conflict in Mac OS X

    - by Juan Medín
    I was trying to install the ImageMagick library on Mac OS X Snow Leopard. First I tried MacPorts and, after that failed, Homebrew. It updated some dependencies and installed ImageMagick without problems. So far so good. The problem came when I ran Apache: I got the following error in the system log:

    07/04/11 12:55:15 org.apache.httpd[41841] httpd: Syntax error on line 115 of /private/etc/apache2/httpd.conf: Cannot load /opt/local/apache2/modules/libphp5.so into server: dlopen(/opt/local/apache2/modules/libphp5.so, 10): Library not loaded: /opt/local/lib/libpng12.0.dylib\n Referenced from: /opt/local/apache2/modules/libphp5.so\n Reason: image not found

    I checked /opt/local/lib and, surprise! I don't have libpng12.0 but libpng14.0. So, as far as I can tell, something went wrong installing the ImageMagick library. Now I can't find a way to roll back to the previous libraries, other than copying them from a backup. Do you know if there is a way to recover the previous state or reinstall Apache? Or is this just a corrupt state, and I must reinstall OS X?
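
    A quick way to see exactly which libraries libphp5.so expects, and what is actually installed, is otool. A minimal sketch (paths taken from the error message above):

      # List the dynamic libraries libphp5.so was linked against; any entry that no
      # longer exists on disk (here libpng12.0.dylib) is what dlopen is failing on
      otool -L /opt/local/apache2/modules/libphp5.so

      # See which libpng versions are present under the MacPorts prefix
      ls -l /opt/local/lib/libpng*.dylib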

    Read the article

  • Intel HD 4000 driver not working

    - by Sagar Parakh
    I have a Dell Inspiron 15R SE 7520. I upgraded my system to Windows 8.1 a few days ago. After the upgrade, my Intel HD 4000 graphics driver stopped working. I downloaded the latest driver from the Dell website, but during installation it said that my graphics driver is not compatible or validated. My dedicated graphics card, an AMD ATI Radeon HD 7730M, has also stopped working. There is also a problem with my screen brightness: I am unable to change it. How can I make my graphics driver work?

    Read the article

  • Windows 7 scheduled task returns 0x2

    - by demmith
    I have identical scheduled tasks running in Windows XP Pro and Windows 7. The XP Pro one runs fine, the Windows 7 one always returns 0x2 (which means, "The system cannot find the file specified"; however, executing from the command line is no problem) in the Last Run Result column of the Task Scheduler UI. The scheduled task executes a .bat file daily. The .bat file contains a call to execute a Perl script. As I stated in the previous paragraph, it executes under XP without any trouble but under Windows 7, no dice. The task under Windows 7 is set to "run whether the user is logged on or not." In this case it is me, I am the only user of the system. It is also set to "Run with highest privileges." And it is not hidden. The .bat file executes perfectly well from the command line - it calls the Perl script as expected and the Perl script does its thing. I have searched far and wide looking for an appropriate answer to this issue. So far I have found nothing. What the devil is going on with this Win7 scheduled task? I am ready to pull my hair out.
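
    One frequent cause of 0x2 under Task Scheduler (and a difference from running the same .bat by hand) is the working directory: unless a "Start in" folder is set for the action, the task starts in %windir%\system32, so any relative path inside the .bat - such as the call to the Perl script - no longer resolves. A small defensive sketch (the script name is a placeholder):

      @echo off
      rem Force the working directory to the folder containing this .bat file, so
      rem relative paths behave the same under Task Scheduler and the console
      cd /d "%~dp0"
      perl .\myscript.pl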

    Read the article

  • Private VOIP network

    - by SuppositoryPlacebo
    I own a small private security services business. Some of my clients require 2-10 security officers per location. I'm trying to think outside the box in order to solve my communications problem. I'd rather not buy or lease HF radios or VoIP systems at the current rates. I'm wondering if there is an existing system, or if it is at all possible, to set up a private communications network using only a server and Bluetooth devices, or a WiFi/Bluetooth combination. http://25.media.tumblr.com/tumblr_m44f0pn7BL1rwp6tgo1_1280.png I don't need "radio" per se; I just need a simple, private VoIP network. Is there an existing device that consists of nothing more than a WiFi adapter controlling a Bluetooth device?

    Read the article

  • How to setup a wired local area network in Windows 7?

    - by user883434
    I am using a Lenovo ThinkPad X200 and want to set up a wired local area network. How can I achieve this? Actually, my question is very simple: I just want to connect to the internet, and I have a cable connection at home. So I just want to plug the cable into my notebook (X200) so that I can access the internet over the cable, but I don't know how to set up a local area connection on the notebook. The connection automatically appears on my desktop, but not on my notebook. Thanks!

    Read the article

  • Apache SMTP connection times out

    - by Kaivosukeltaja
    A web server that has successfully sent mail using the hosting provider's SMTP server before seems to have suddenly lost its connection to the SMTP server:

    [Wed Nov 28 09:51:27 2012] [error] [client 10.250.11.81] PHP Warning: fsockopen(): unable to connect to smtp.ourprovider.net:25 (Connection timed out) in /var/www/(....)/phpmailer/class.smtp.php on line 105, referer: http://oursite.net/sendmessage.php#

    If I telnet to the SMTP server's port 25 manually from the web server, I'm able to connect and send mail with no problems whatsoever. The web server is running RHEL 6.3 and Apache 2.2.15. The SELinux boolean httpd_can_network_connect is on. Our PHP version is 5.3.3. Where should I start looking to fix this?
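
    Since telnet from a shell works but PHP under Apache does not, SELinux remains a plausible suspect even with httpd_can_network_connect on: RHEL 6 has a separate boolean specifically for httpd reaching mail ports. A quick sketch of things worth checking:

      # Show the httpd-related booleans and their current state
      getsebool -a | grep httpd

      # Allow scripts run by httpd to connect to SMTP ports; -P makes it persistent
      setsebool -P httpd_can_sendmail on

      # If SELinux is blocking the connection, matching denials should show up here
      grep denied /var/log/audit/audit.log | tail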

    Read the article

  • Extensions disappear when I close and open Google Chrome

    - by PavanM
    I am running the latest version of Google Chrome, 23.0.1271.97 (Official Build 171054) m, on Windows 7. Any new extension I install simply disappears (not disabled - a total disappearance) once I close and restart Google Chrome. This is not happening to one of my old extensions; it stays there across Chrome restarts. I tried everything Google help suggested: I created a new user profile by renaming the Defaults folder, and I checked for any permission change that the extensions might have undergone; this is not the case. I am not running in developer mode. This happens when I close ALL instances of Google Chrome; even if one instance of Chrome is running, it doesn't happen. But I can't have an instance of Google Chrome always running :( I even reported the issue to the Google Chrome team, to no avail, and new.crbug.com is offline. And I skimmed through many threads opened for the same issue, only to find souls like me. SE is my last resort :)

    Read the article

  • Is there a way to do a sector level copy/clone from one hard drive to another?

    - by irrational John
    Without going into distracting details, I'm attempting to duplicate the contents of the 500GB drive in my MacBook to another 500GB drive. But this is turning out to be an unexpected hassle because the drive contains both the OS X partition and an NTFS partition with Win 7 via Apple's Boot Camp. With the exception of Clonezilla, the tools I have looked at so far all have some limitation. The Mac tools don't want to deal with the NTFS partition. The Windows tools are totally clueless about either the HFS+ partition and/or the hybrid MBR/GPT Boot Camp partitioning. Clonezilla looked like it would do what I want but apparently I can't figure out how to use it. After doing what I thought was a sector to sector copy I found that only the NTFS partition had been migrated. The others were apparently empty. (And frankly, I'm not positive Clonezilla migrated the partition table correctly either). Note: It takes over 2 hours using SATA to read/write all sectors with these drives. So I'm not up for using trial & error to narrow in on the right combination of Clonezilla options to use. I'm beginning to think that maybe the answer is to boot Linux (probably Ubuntu) and then use some ancient BSD command. Trouble is I don't know what command (or parameters to use) in order to do a sector level copy from one drive to another. As far as I know the drives have the same number of sectors so this should be trivial. Sigh.
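
    The "ancient BSD command" being hinted at is most likely dd, which copies raw sectors and therefore carries the hybrid MBR/GPT layout, the HFS+ partition and the NTFS partition across unchanged, provided the target drive has at least as many sectors. A minimal sketch from a Linux live session (device names are placeholders and must be double-checked - swapping if and of destroys the source drive):

      # Identify which device is the source and which is the target first
      lsblk -o NAME,SIZE,MODEL

      # Raw sector-for-sector copy of the whole disk, partition table included.
      # bs=4M keeps throughput reasonable; conv=noerror,sync carries on past read errors.
      sudo dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync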

    Read the article
