Search Results

Search found 17519 results on 701 pages for 'live environment'.


  • My Boot order changed. Why?

    - by Chris
    I have a laptop running Windows XP SP3 with one internal hard drive partitioned into C: (system) and D: (storage), plus an external hard drive, F: (external drive). Yesterday the machine was running fine. Today I came back to it and saw nothing but a blinking cursor. I checked through the BIOS and the hard drive checked out fine. I Ctrl-Alt-Deleted the machine a few times, but was never able to boot back into the operating system. I threw in a live CD and found that the order of the drives had changed: the external drive is now C:, the system partition is D:, and the storage partition is E:. Does anyone have any idea how or why this would have occurred? Automatic system updates are turned off, so there should have been no automatic reboot of the system overnight, and the anti-virus that runs on the machine found no infections before this occurred. Edit: When I was looking through the BIOS of the machine, I did see that the boot order had changed. But the same question remains: what would have caused this to happen? I can't believe that a random reboot happened and totally changed my hard drive setup.

  • Joining Samba to Active Directory with local user authentication

    - by Ansel Pol
    I apologise that this is somewhat incoherent, but hopefully someone will be able to make enough sense of this to understand what I'm trying to achieve and provide pointers. I have a machine with two network interfaces connected to two different networks (one of which it's providing several other services for, such as DNS), running two separate instances of Samba, one bound to each interface. One of the instances is just a workgroup-style setup using share-level authentication, which is all working fine. The problem is that I'm looking to join the other instance to an MS Active Directory domain (provided by MS Windows Small Business Server 2003) to enable a subset of the domain users to access the shares from Windows machines on the other network. The users who need access from the domain environment have accounts (whose names are all-lowercase versions of their domain usernames) on the machine running Samba, but I'm not sure about how to map the UIDs and everything I've read concerns authenticating accounts on that machine against either AD or another LDAP server. To clarify: I only want the credentials for AD users accessing the non-workgroup Samba instance to be authenticated against AD, not the accounts on the machine running Samba. I hope this is sufficiently clear. EDIT: In addition to being able to access the Samba shares from AD, I do also need to be able to access a share on the domain from the machine running Samba but would still like everything non-Samba-related to authenticate locally.
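    A minimal sketch of what the AD-facing instance's smb.conf might look like is below, assuming Samba 3.x, a working Kerberos setup plus "net ads join" against the SBS 2003 domain controller, and a username map file that translates domain logins to the existing lowercase local accounts. The realm/workgroup names and the idmap range are placeholders, not a verified configuration:

        # /etc/samba/smb.conf (AD-facing instance only) -- sketch, names are placeholders
        [global]
            security = ads
            realm = EXAMPLE.LOCAL
            workgroup = EXAMPLE
            # translate e.g. EXAMPLE\JSmith to the existing local account "jsmith"
            username map = /etc/samba/smbusers
            # fallback UID/GID range for domain users with no matching local account
            idmap uid = 10000-20000
            idmap gid = 10000-20000

        # /etc/samba/smbusers
        jsmith = EXAMPLE\JSmith

    With this approach, share access is authenticated against AD while file ownership still lands on the local accounts named in the map.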

  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right stackexchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired. This prevents 'project activity' appearing in GitLab. The problem looks like:

        $ git push
        #...
        error: cannot run hooks/post-receive: No such file or directory

    Hook exists: the post-receive hook/symlink exists and is executable:

        -rwxr-xr-x 1 git git 470 Oct 3 2012 .gitolite/hooks/common/post-receive
        lrwxrwxrwx 1 git git 45 Oct 3 2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive

    It's executable by GitLab: the gitlab user can execute the script (I have removed the /dev/null redirect and fed in blank input to get an 'OK' as output):

        sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive
        OK

    GitLab can find it: GitLab is looking for hooks in the correct location:

        $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml
          hooks_path: /home/git/.gitolite/hooks/

    and

        $ bundle exec rake gitlab:app:status RAILS_ENV=production
        # ...
        /home/git/.gitolite/hooks/common/post-receive exists? ............YES

    Environment: the env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK:

        $ env -i redis-cli
        redis>

    I've run out of debugging ideas on this one. Does anybody have any suggestions?
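    One long shot that fits these symptoms (the script plainly exists, yet exec reports "No such file or directory"): that error frequently means the interpreter named on the script's shebang line is what cannot be found, not the script itself, for example a ruby path that only resolves in an interactive login shell. A hedged check, with paths taken from the question and the git account being the one that actually receives pushes:

        head -n 1 /home/git/.gitolite/hooks/common/post-receive   # which interpreter does the hook ask for?
        sudo su - git -c 'which ruby'                             # can the push-receiving account find it?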

  • Virus-ridden computer freezes on startup - can't access safe mode

    - by Eric
    Someone whom I love but who cannot be trusted with a live internet connection downloaded a particularly nasty virus that in turn downloaded a variety of unknown other viruses onto my home computer. The computer now freezes completely a few seconds after reaching the desktop and is unresponsive to any keyboard or mouse command. There are videos of my little kid on this hard drive that are not backed up and that I cannot bear to lose. But if I could get in there long enough to copy them off to an external drive, I would have no problem doing a clean Windows install to fix the problem; everything else is backed up online, but the videos were too large. Normally I would start by going into safe mode, but I have a large Dell monitor that doesn't show anything until the welcome screen appears. I think that I have gotten into the setup screen once or twice by mashing keys before I can see anything, but this monitor doesn't support that, so I can't see what I'm doing to get it to boot from CD or anything else. I'm at my wits' end. Any advice?
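    Since the freeze only happens once Windows reaches the desktop, one low-risk way to get the videos off is to boot the machine from a Linux live CD (which sidesteps both the infected Windows install and the safe-mode/monitor problem) and copy the files to the external drive from there. A rough sketch, assuming an Ubuntu live CD; the device names and the folder path are guesses to confirm with "sudo fdisk -l" and by browsing the mounted partition:

        sudo mkdir -p /mnt/windows /mnt/external
        sudo mount -o ro /dev/sda1 /mnt/windows      # internal Windows partition, mounted read-only
        sudo mount /dev/sdb1 /mnt/external           # the external drive
        cp -rv "/mnt/windows/Users/<name>/Videos" /mnt/external/
        sync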

  • Motion - takes snapshot without motion detected

    - by Emmanuel Brunet
    I've installed the standard motion 3.2.12 package on Debian 7.5. I would like to get snapshots ONLY when motion is detected, but it still saves a picture every second without any activity in front of the camera. I'm using a TENVIS JPT3815W IP camera. Here is my configuration file (motion.conf):

        setup_mode off
        target_dir /media/videos/log/webcam
        netcam_url http://webcam/snapshot.cgi
        netcam_tolerant_check on
        netcam_userpass admin:alpha1237
        # Output frames at 1 fps when no motion is detected and increase to the
        # rate given by webcam_maxrate when motion is detected (default: off)
        webcam_motion off
        output_all off
        # detection settings 1-255 default 32
        noise_level 50
        # Maximum framerate for webcam streams (default: 1)
        webcam_maxrate 25
        pre_capture 0
        framerate 25
        gap 30
        locate on
        mail [email protected]
        text_right "FRONT CAMERA %Y/%m/%d - %T"
        text_double on
        ffmpeg_cap_new on
        ffmpeg_cap_motion on
        ffmpeg_video_codec mpeg4
        output_motion off
        snapshot_interval 0
        # Quality of the jpeg (in percent) images produced (default: 50)
        quality 90
        # Restrict webcam connections to localhost only (default: on)
        webcam_localhost off
        # Limits the number of images per connection (default: 0 = unlimited)
        # Number can be defined by multiplying actual webcam rate by desired number of seconds
        # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
        webcam_limit 0

    The issue: when I start motion, images are stored in /media/videos/log/webcam nearly every second. I just want to get images when motion is detected, plus the corresponding video clip. Any idea where the configuration fails?
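    One direction worth checking, offered as a guess rather than a verified fix: if pictures appear every second even with snapshot_interval 0 and output_all off, the netcam stream itself may be triggering false motion on every frame, in which case the detection settings rather than the output settings are the place to look. A sketch of values to experiment with (all standard motion 3.2 options; the numbers are starting points, not known-good values):

        output_normal on          # save pictures only for detected motion (the default)
        output_all off            # never dump every decoded frame
        snapshot_interval 0       # no timed snapshots
        threshold 4500            # require more changed pixels before declaring motion
        minimum_motion_frames 3   # require motion to persist across several frames
        despeckle EedDl           # filter single-pixel noise typical of netcam JPEGs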

  • Is it a good idea to run Redmine using Webrick through Nginx?

    - by Rohit
    The task here is to get Redmine set up for a small (<20 person) team. There may be a few users who would access the setup as business clients. I am familiar with setting up PHP for Apache and, recently, Nginx. I am not familiar with Ruby, Ruby on Rails, etc. I prefer to use the OS's (Ubuntu Linux LTS) package manager to install the different components, as it takes care of dependencies and updates. I have set up Nginx with PHP-FPM successfully and am struggling with Redmine. As suggested here, I got Redmine running on port 3000:

        # /etc/init/redmine.conf
        # Redmine
        description "Redmine"

        start on runlevel [2345]
        stop on runlevel [!2345]

        expect daemon
        exec ruby /usr/share/redmine/script/server webrick -e production -b 0.0.0.0 -d

    And using the Nginx config on this page, I used Nginx to proxy requests to WEBrick:

        server {
            listen 80;
            server_name myredmine.example.com;

            location / {
                proxy_pass http://127.0.0.1:3000;
            }
        }

    This works well locally. I wanted some opinions before trying this out on the live box (a 256 MB VPS). Further, should I use something like monit to monitor WEBrick for failure?
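    On the monitoring question, a minimal monit sketch is below. It assumes the Upstart job above stays in place and that the daemonized WEBrick writes its pid to the usual Rails location (tmp/pids/server.pid); both of those are assumptions to verify on the box:

        # /etc/monit/conf.d/redmine -- sketch only
        check process redmine with pidfile /usr/share/redmine/tmp/pids/server.pid
            start program = "/sbin/start redmine"
            stop program  = "/sbin/stop redmine"
            if failed host 127.0.0.1 port 3000 protocol http then restart
            if 3 restarts within 5 cycles then timeout

    It may also be worth noting that WEBrick is generally treated as a development server; on a 256 MB VPS, a single Thin or Unicorn worker behind the same Nginx proxy location is a common alternative.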

  • Replacing the default console emulator under Windows XP

    - by Gilles
    How can I replace the default program providing console windows under Windows XP? I know of alternative programs, and I have a shortcut to start cmd.exe in Console2. But now I want console applications to start in Console2 rather than the default console program, even when I have no control over the program that starts the console application. (I.e. a non-console program starts consoleapp.exe, and I can't change it to start Console2 instead, but I still want the application to be started inside a new instance of Console2.) (Note that I want to replace the console itself, that is, the window in which console (i.e. text mode) applications run. And I must be able to run arbitrary, unmodified console applications: a substitute for a specific console program such as Cmd won't do me any good.) EDIT: So what I'm after is a CSRSS replacement, which leads to OT: I want to know when Microsoft is going to make a decent CSRSS replacement. Not being able to adjust the width of a "terminal" by resizing the window is a complete joke. Go download the ISE already. (It's included in Win7/2008R2.) But as far as I understand this ISE is an environment for Powershell, not a general console emulator.

  • Why does MOSS sometimes delete an existing user from a site?

    - by Jesse
    I'm experiencing an issue with a MOSS installation. I am using the Site Settings Permissions to add an Active Directory account as a valid user of a site. This entails validating that the user account name is correct via the 'Check Names' button, then giving them 'Contribute' permissions. Once this is done they appear as a user on the 'All People' page. This works fine and the user is able to access the site. At some point in the future (sometimes several days later) the user account is somehow removed as a valid user from the site. This site resides in a test environment so access is pretty well controlled, which has allowed us to rule out someone else going in and removing the user manually. This appears to be something that is being done by the system itself and we have no idea why. We can manually add the user back, but then it will eventually get removed again later. I have an admittedly limited understanding of SharePoint permissions, but I believe that SharePoint stores valid users in a SQL database and I would assume that when dealing with Active Directory accounts it would be storing the user name and probably the SID. It appears that for some reason this record is later getting deleted out of the database, as the users will suddenly disappear from the "All People" page and will start getting "Access Denied: You are not authorized..." messages when trying to access the site. Has anyone seen this behavior before?

  • SSH sometimes screws up connection when terminal overflows?

    - by SeveQ
    I've got a problem with SSH on a Debian Lenny based server (it's a vHost within a Xen environment, booted on a Xen kernel). I hope someone can help me with this. The SSH connection frequently seems to get screwed up somehow when the terminal overflows (new lines beyond the bottom of the terminal, usually forcing it to scroll). The connection gets lost but is not properly disconnected. It nearly always happens when I do the following:

        - an existing SSH connection gets disconnected (regularly)
        - I tell PuTTY to reestablish the connection
        - the login prompt appears at the very bottom of the PuTTY terminal window
        - I enter my login name and press the Enter key
        - I'm asked for the password, I enter it, press the Enter key and BOOM! Nothing more happens.

    I have to reconnect again. So it is reproducible. I'm not totally sure if the connection crashes before or after I enter the password. Furthermore, it also happens when there is much text to be displayed (for example when I compile something or do an ls -l on a directory with many entries). Using 'screen', however, helps to reduce the frequency of occurrence but doesn't solve the problem completely. Its occurrence is independent of which terminal software I use. I mostly use PuTTY, but it also happens with other clients. I certainly hope somebody can help me solve this problem. Thanks in advance!

    //edit: I've just made a Wireshark trace of the SSH connection and there is nothing, I repeat, nothing different between the working and the failing connection (at least aside from frame numbers, ports and times that obviously can't be equal). This leads me to the assumption that the error has to happen on the server's side.
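    One avenue that matches the "large bursts of output hang the session" symptom on Xen guests is NIC offload or MTU trouble on the virtual interface rather than SSH itself. These are guesses, not a diagnosis, but they are cheap to test from the guest and easy to revert:

        sudo ethtool -K eth0 tx off sg off tso off   # disable checksum/segmentation offload on the vif
        sudo ip link set dev eth0 mtu 1400           # temporarily rule out MTU/fragmentation issues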

  • Using my old PC as a web/file server?

    - by Garrett
    I have an old desktop computer that I've been trying to sell for ages. I guess nobody is looking for computers, because it was advertised at a dirt cheap price on craigslist, local papers, etc. Anyway, I was wondering if it would be worth it to set it up as a home file server, a web dev server (I have a web host for actual production use), and maybe host a few server applications (e.g. Ventrilo). The computer is actually an old Dell that I cannibalized after the motherboard was destroyed by lightning, so it has fairly new parts in it. The specs are:

        P4 3.4 GHz w/ HT and Arctic Cooling Freezer 7
        3 GB DDR2 533 RAM
        80 GB HDD (will upgrade the hard drive if it's even worth using as a server)
        basic DVD-ROM
        430 W Thermaltake PSU (it might be important to note that it is only 60% efficiency)
        ATI Radeon X600 256 MB
        Antec 300 case

    It's not a really beefy machine, I just can't see giving it away or putting it in the corner to just collect dust. I have Windows Server 2008 R2 Standard and I am confident in my skills in operating most Linux operating systems. I'd also be using it to tinker with when I learn new things in my server admin classes (I'm finishing my 2nd year in college at the moment, so I'm still learning). Also, my house is quite old and the electrical wiring is pretty poor (it MIGHT be up to code; then again, where I live most people don't even know what regulations are, let alone how to spell them...). Would it be safe to leave it running all day, and is it going to run up my electric bill because of the PSU efficiency? I only have 5 Mbit cable internet, but I won't be running very bandwidth-intensive services on it, so that should be OK. I should elaborate on why I am concerned about the power. The circuits should be fine, but I'm more concerned about fire hazard. What is the likelihood that the server could cause an electrical fire? Again, thank you all for the feedback!
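    On the electric bill question, a rough back-of-the-envelope estimate (the wattage and the price per kWh here are assumptions, not measurements): the 430 W rating is the PSU's maximum output, not what the machine draws. A Pentium 4 system like this typically pulls on the order of 100-150 W, and at roughly 60% efficiency that becomes about 170-250 W at the wall. Taking 200 W as a round number and $0.12 per kWh:

        0.2 kW x 24 h x 30 days = 144 kWh per month
        144 kWh x $0.12/kWh     = about $17 per month

    A cheap plug-in power meter (a Kill A Watt or similar) would replace those guesses with a real number.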

  • Why is the Volume Shadow Copy service stopping?

    - by David Mackintosh
    I am running Windows 7 Professional, 64-bit. I am running a backup-over-the-internet software client which depends on the Volume Shadow Copy service running. Since I installed Service Pack 1 (or rather, didn't object when Windows Update forced Service Pack 1 on me), the backup service is failing to back everything up because VSC isn't running. Most of the time it fails to back up such noise as the Security Essentials database or the Messenger Live contact list -- stuff I really don't care about -- but I don't want to fall into the trap of accepting an Error-state backup as "normal". At the recommendation of the backup software, I have set the VSC service startup mode to Automatic. When I look in the Event Log, System channel, I can see at boot time:

        The Volume Shadow Copy service entered the running state.

    ...and then two or three minutes later:

        The Volume Shadow Copy service entered the stopped state.

    How do I figure out why VSC is stopping? At the suggestion of the backup vendor, I have already followed the suggestions from http://support.microsoft.com/default.aspx/kb/940184:

        net stop SENS
        net stop EventSystem
        net start EventSystem
        net start SENS
        net stop COMSysApp
        net stop SwPrv
        net stop VSS
        cd /d C:\Windows\system32
        regsvr32 ole32.dll /s
        regsvr32 oleaut32.dll /s
        regsvr32 vss_ps.dll /s
        vssvc /register /s
        regsvr32 /i swprv.dll /s
        regsvr32 /i eventcls.dll /s
        regsvr32 es.dll /s
        regsvr32 stdprov.dll /s
        regsvr32 vssui.dll /s
        regsvr32 msxml.dll /s
        regsvr32 msxml3.dll /s
        regsvr32 msxml4.dll /s
        net start SwPrv
        net start VSS
        net start ProtectedStorage

    ...and per http://support.microsoft.com/kb/940184 I have deleted the key tree HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions. I have also run chkdsk /F and chkdsk /R on both permanent hard disks. (I had a similar problem with another computer (same OS, same failure, same start point after the SP1 install), but the problem went away when I forced the Volume Shadow Copy service to Automatic startup rather than Manual. I did not have to resort to following the Microsoft KB instructions.)
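    A couple of read-only diagnostics that may narrow down why the service exits (these only report state, they are not a fix), run from an elevated command prompt:

        vssadmin list writers      # any writers stuck in a failed state?
        vssadmin list providers    # third-party VSS providers (backup/imaging tools) that may interfere
        sc qc VSS                  # confirm the configured start type
        sc query VSS               # current state

    The Application log (as opposed to the System log) is also worth checking around the time of the stop event, since VSS and its writers usually log their errors there.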

  • Cache-control for permanent 301 redirects nginx

    - by gansbrest
    I was wondering if there is a way to control the lifetime of redirects in Nginx. We would like to cache 301 redirects in the CDN for a specific amount of time, say 20 minutes, and the CDN is controlled by the standard caching headers. By default there is no Cache-Control or Expires directive on an Nginx redirect, which could cause the redirect to be cached for a really long time. With a specific redirect lifetime the system would have a chance to correct itself, since even "permanent" redirects change from time to time. The other thing is that those redirects are included from the server block, which according to the Nginx documentation is evaluated before locations. I tried to add

        add_header Cache-Control "max-age=1200, public";

    to the bottom of the redirects file, but the problem is that Cache-Control gets added twice: one comes, say, from the backend script and the other one is added by the add_header directive. In Apache there is the environment variable trick to control headers for rewrites:

        RewriteRule /taxonomy/term/(\d+)/feed /taxonomy/term/$1 [R=301,E=expire:1]
        Header always set Cache-Control "store, max-age=1200" env=expire

    But I'm not sure how to accomplish this in Nginx.
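    One hedged sketch of a possible approach: if the redirect can be answered by Nginx itself in a dedicated location block, there is no backend response involved, so the Cache-Control set there is the only one emitted. The path below is taken from the Apache example above; whether this fits how the redirects file is currently included is an assumption:

        location ~ ^/taxonomy/term/(\d+)/feed$ {
            add_header Cache-Control "max-age=1200, public";   # 20 minutes for the CDN
            return 301 /taxonomy/term/$1;
        }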

  • Setting up a Network Bridge on Linux VM (Windows 7 Host)

    - by GrandAdmiral
    I would like to use NetEm to simulate a low-bandwidth environment while testing an Internet-connected device. My plan is to set up a bridge in a Linux VM (Linux Mint 13) on a Windows 7 host. Unfortunately I'm having trouble setting up the bridge. Once it works, I can use NetEm in the Linux VM to limit the bandwidth to the external device. I went with the following script:

        ifconfig eth0 0.0.0.0 promisc up
        ifconfig eth1 0.0.0.0 promisc up

    Then create the bridge and bring it up:

        brctl addbr br0
        brctl setfd br0 0
        brctl addif br0 eth0
        brctl addif br0 eth1
        dhclient br0
        ifconfig br0 up

    When I run that script, I see the following warning:

        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service smbd reload

        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the reload(8) utility, e.g. reload smbd

    The device connecting to the bridge is able to obtain an IP address, but it can only ping the IP address of the bridge (both are 10.2.32.xx). Then after a few minutes, other parts of our network go down. I'm not sure why, but once I kill the bridge the network is fine. Is it possible to set up a network bridge in a Linux VM? Do I need to do something else with the dhclient br0 part of the script? By the way, I'm using VirtualBox. The wired connection is eth0 and the wireless connection is eth1. The wired connection goes to the device and the wireless connection goes to the network. Both adapters are set up as bridged adapters with promiscuous mode set to "allow all".
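    Two hedged observations rather than a confirmed fix. First, requesting a DHCP lease on br0 before the bridge interface is up is suspect; reordering the last two lines is a cheap experiment:

        ifconfig eth0 0.0.0.0 promisc up
        ifconfig eth1 0.0.0.0 promisc up
        brctl addbr br0
        brctl setfd br0 0
        brctl addif br0 eth0
        brctl addif br0 eth1
        ifconfig br0 up
        dhclient br0

    Second, bridging across a wireless station interface is often the real blocker: most Wi-Fi drivers and access points drop bridged frames carrying a foreign source MAC unless the link supports 4addr/WDS mode, and VirtualBox's bridged-networking layer on the Windows host adds another MAC-rewriting hop. If the goal is only to shape traffic for the test device, routing (or NAT) between eth0 and eth1 instead of bridging may be simpler.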

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run a Sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short, the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work because the way we customize all the Win7 settings is complex and we do not want the user touching the settings). I understand Microsoft will not support Windows 7 clones if it is not sysprepped and that is fine for us. Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux because we want to use open source tools to do the imaging (fsarchiver, partclone, etc. basically the same tools Clonezilla uses internally to clone NTFS partitions). The question is, if we clone using these tools in Linux, how would we generate a new SID thereafter (without the use of sysprep)? Is there any way to do it within a Linux environment? The whole image process is automated so if it is a simple command that I can just throw in my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas? EDIT: Forgot to mention that the target machines we are restoring the image on are EXACTLY the same.

  • Syncing Multiple Google Calendars and with Outlook and Android

    - by Fred Thomas
    Perhaps this is a multipart question, but I deal with a lot of calendars in my life and want to know if there is some way to sync them all together while maintaining appropriate privacy. I have a family calendar that my ex and I maintain for kid events, I have a personal calendar for my own life, and I have an Outlook work calendar for work. Ideally I'd look at my calendar on my Android phone. Is it possible to sync them all together? Is it possible to have one calendar to rule them all on my phone, but have entries from the other calendars show up only as blocked-out time, without the details? (I don't want my date with Miss Hottie to appear that way on the family calendar, and I probably don't want my visit to the proctologist to appear on the corporate Exchange server.) Are there tools available to do this? Bonus question: can I do the same with my to-do lists? Double bonus question: how can I solve world hunger and help us all live together in peace? :-)

  • Backup plan for linux webserver in small business?

    - by radman
    Hi, I am currently in the process of writing a backup plan for the webserver in use by my business. I am very new to this area and have a few ideas about how things should work, but am unsure of what tools to use and what sort of restore process is appropriate. I'm looking for something relatively simple; it doesn't have to be 100% paranoid, just enough to give me a reliable backup. Speed is not of the essence and there is not going to be a live fallback in place. The backup will be onto a single HDD that will be stored onsite (no option for offsite as yet). Backups will be taking place weekly. I am constrained by both time and money, which is why I'm aiming for a good-enough solution. Is taking an image of the webserver system drive periodically and using that as the backup appropriate? Should I be testing that the backups restore correctly every time I perform one? This is a bit broad, but what setup would you use if you were in my place, given the services I am running? Should I add additional machines and split the services? Any advice is much appreciated! See below for server details.

    Webserver
        Platform: Linux (Ubuntu Server)
        Running: mail server, svn server, MediaWiki, WordPress, Apache webserver
        Hardware: single 500 GB SATA drive
        Architecture: single machine behind a router (with firewall), accessible to the internet
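    For the "good enough, weekly, single external disk" constraint, a plain file-level copy plus database/repository dumps is often easier to verify than full disk images. A minimal sketch is below; every path, the mount point /mnt/backup, and the assumption that MediaWiki/WordPress use MySQL with credentials in root's .my.cnf are placeholders to adjust:

        #!/bin/sh
        # weekly-backup.sh -- sketch only
        DEST=/mnt/backup/$(date +%Y-%m-%d)
        mkdir -p "$DEST"

        # dump databases and the svn repository rather than copying their live files
        mysqldump --all-databases > "$DEST/mysql-all.sql"
        svnadmin hotcopy /var/svn/repo "$DEST/svn-repo"

        # configuration, web content and mail
        rsync -a /etc /var/www /var/mail "$DEST/"

    On the restore question: restoring every backup is overkill, but periodically restoring a dump and a handful of files onto a scratch machine (or a VM) is the only way to know the backups are actually usable.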

  • CloneZilla Broke My System? Ubuntu Installation Lost After Running CloneZilla

    - by nicorellius
    I just read through this post and tried to get my installation back using this answer, to no avail. What happened to me is this: I spent an hour or more reading through the CloneZilla docs. I thought I was ready to test it out, so I burned the disc with the ISO image on it and ran it. The system I used was Ubuntu 10.04, 32-bit. Everything seemed to go fine. I made a clone of my first partition and copied it to my second partition. I followed the instructions, removed the disc and rebooted my system. At this point, I would expect to have two bootable Linux installations, identical to one another. However, upon booting, I got this error message:

        error: no such device: 4cf1a6ef-xxxx-xxxx-xxxx-4e3a3ce92bcd
        error: file not found

    I booted from a live Ubuntu disc and was able to see my two partitions: 4cf1(1) and 4cf1(2) (abbreviated, because the volumes have long numbers to identify them). The 50 GB partition, on which the original Ubuntu installation sits, is the first, and the second partition (175 GB) is the same number with an "_" at the end. I could browse the disc partitions and see the files, but I'm not sure what to do next. I know there is a way to restore my GRUB loader and actually boot either of these installations, but my Linux know-how is limited. Can I edit the boot loader file to fix this problem? The only clue I have is that CloneZilla said something about making a new GRUB, but I thought it was going to basically modify it so I could boot either installation. Not sure what happened. I am going to look through this post for the time being to see if I can learn anything to help my problem. But I thought that, since this happened as a result of using CloneZilla, it may be a unique question for this board.
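    One commonly suggested recovery path (a sketch, not guaranteed to match this exact situation) is to chroot into the original 50 GB installation from the live disc and reinstall GRUB; the device names below are assumptions to check first with "sudo fdisk -l" or "sudo blkid":

        sudo mount /dev/sda1 /mnt              # the original 50 GB Ubuntu partition
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sda                  # reinstall GRUB to the disk's MBR
        update-grub                            # regenerate grub.cfg; should list both installs
        exit

    The "no such device: <UUID>" message suggests grub.cfg points at a filesystem UUID that no longer matches either partition, which running update-grub from the real installation should correct.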

  • Virtual Machine Network Architecture, Isolating Public and Private Networks

    - by Mark
    I'm looking for some insight into best practices for network traffic isolation within a virtual environment, specifically under VMware ESXi. Currently I have (in testing) 1 hardware server running ESXi, but I expect to expand this to multiple pieces of hardware. The current setup is as follows: 1 pfSense VM; this VM accepts all outside (WAN/internet) traffic and performs firewall/port-forwarding/NAT functionality. I have multiple public IP addresses sent to this VM that are used for access to individual servers (via per-incoming-IP port forwarding rules). This VM is attached to the private (virtual) network that all other VMs are on. It also manages a VPN link into the private network with some access restrictions. This isn't the perimeter firewall, but rather the firewall for this virtual pool only. I have 3 VMs that communicate with each other, as well as have some public access requirements:

        - 1 LAMP server running an eCommerce site, publicly accessible from the internet
        - 1 accounting server, accessed via Windows Server 2008 RDS services for remote access by users
        - 1 inventory/warehouse management server, VPN to client terminals in warehouses

    These servers constantly talk with each other for data synchronization. Currently all the servers are on the same subnet/virtual network and connected to the internet through the pfSense VM. The pfSense firewall uses port forwarding and NAT to allow outside access to the servers for services and for server access to the internet. My main question is this: is there a security benefit to adding a second virtual network adapter to each server and controlling traffic such that all server-to-server communication is on one separate virtual network, while any access to the outside world is routed through the other network adapter, through the firewall, and on to the internet? This is the type of architecture I would use if these were all physical servers, but I'm unsure if the networks being virtual changes the way I should approach locking down this system. Thank you for any thoughts or direction to any appropriate literature.

  • Virtual memory committed

    - by vinu
    Around 40-45 days after a server bounce, we start receiving continuous "Committed Virtual Memory" alerts, which indicate swap space usage on the order of 4 GB. This also causes the application to perform very slowly and experience a number of stalled transactions.

    Server setup: 4 Tomcat servers (version 7.0.22) that are load balanced (not clustered) by 2 Apache servers; the Apache servers themselves supply static content and routing to these 4 Tomcat servers.

    Java runtime version:

        java version "1.6.0_30"
        Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
        Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)

    Memory startup parameters:

        MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"

    Monitoring: Wily monitoring is available on all the production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings.

    Note: each of the servers also has two other separate Tomcat domains that run different applications.

    Investigated areas:

        - There is no heap memory leak, and GC is running fine without any issues over any period of time.
        - The current busy thread count corresponds directly to application usage: weekends and nights have fewer threads than business hours.
        - ThreadLocal uses a WeakReference internally. If the ThreadLocal is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded. But this is how any class member works, and isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal (sort of a circular reference). In this case, the value (a SimpleDateFormat) has no backwards reference to the ThreadLocal. There's no memory leak in this code.

    Can anyone please let me know what could be the cause of this and what should be monitored?
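    Some diagnostics (assuming Linux hosts; they only report state) that may help pin down whether it is the JVMs' native footprint, i.e. heap plus permgen plus thread stacks across the three Tomcat domains per box, that is pushing the machine into swap, or something else. The <pid> placeholders are whichever Tomcat process is largest:

        free -m                              # total swap in use
        vmstat 1 10                          # si/so columns show whether swapping is active right now
        ps aux --sort=-rss | head            # which processes actually hold the resident memory
        jmap -heap <pid>                     # heap and permgen usage inside one JVM (JDK 6)
        pmap -x <pid> | tail -n 1            # total virtual/resident size of that JVM process

    With -Xmx1024m, 256 MB of permgen and 192 KB stacks per thread, each Tomcat can plausibly commit well over 1.5 GB, so three domains per server may simply exceed physical RAM once they are all warmed up; comparing the ps/pmap numbers against installed RAM would confirm or rule that out.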

  • Nexus One WiFi connection problems.

    - by sunocky
    I have two new Nexus Ones for a research project. For the project I need to keep a server running on the phone. But I soon found out that both phones have inconsistent WiFi connection problems at my home. They can connect to my WiFi network, but will drop off after a random amount of time. And in order to reconnect to my WiFi, I may need to reboot my router, or the phone will say "obtaining IP address" and then "Unsuccessful". I also own a G1 with firmware version 1.6, and it has no such connection problems. Well, to my surprise, the two Nexus Ones work fine when connecting to the WiFi network at my workplace, which is a WEP-type WiFi connection. By the way, it is a WPA-type connection at my home. Does anyone know what the problem is with the Nexus One? Any suggestions on what I should do if I want to keep the WiFi connection live all the time at my home? Thanks very much!

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> When I access a non-existent URL at my website, this rewrite rule gets hit, redirects to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this HTTP Live Headers output example: http://www.borngeek.com/nothere/ GET /nothere/ HTTP/1.1 Host: www.borngeek.com {...} HTTP/1.1 404 Not Found However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity): {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...} This makes some sense to me, seeing as the original request was redirected to page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc). I'd love to have them show up on the Apache side as 404s so that they get filtered out from the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous redirect rules): ErrorDocument 404 /index.php?error=404 Does anyone have a clever way to fix this annoyance? Additional Info: OS is Debian 6.0.4, and Apache version looks to be 2.2.22-3 (hosted on DreamHost) The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere)

  • Build suggestion for mini itx server [on hold]

    - by Spyros
    I am looking to create a home server, and since there does not seem to be a nice ready-made one to purchase, I thought of building a mini-ITX one. I am not familiar with the best hardware/case to pick, and therefore I thought I would ask for your experience. I will most probably install a simple Linux distro, probably Debian, and most probably without a desktop environment as well. Therefore, I am not looking for the best graphics card, though a decent one is preferred for sure. I would say that the most important thing is that the case/hardware play well with each other and the machine has decent specs (I think something like 8 GB RAM and a 1 TB HDD are probably the most important things). I understand that this question can be closed as subjective, but it would be a great help for me if some of you who have done a similar build can suggest something. I tried to find a Stack Exchange variant for the question, but it seems that this one most closely relates to the subject. But I will definitely understand it if people close it due to being subjective in nature. If you can maybe move it somewhere or so, please feel free to do so. Thank you for your time :)

  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello, I recently went to the Dell service center and they say the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (purchased in May 2010, still under warranty). I have searched a bit and I think it may be best to use disk imaging software for this task. However, I don't know of a good program. I have the following steps in mind:

        1. Get a 1 TB external HDD.
        2. Make an image of the existing 500 GB HDD and store the data on the external disk.
        3. Install the new HDD, install a brand new copy of Windows, and then install the imaging software on it.
        4. Using the same software I used to make the image, restore the old HDD image onto the new drive.

    However, I have some questions in mind. First, is this possible? Second, I live in a country where piracy is a big issue, and I am sure the support executive who will come to change the HDD will have a pirated copy. But I have genuine Windows 7 Pro and don't want to lose it. Now, Dell does not supply OS disks, so I can't install it on the new HDD! If I follow the above steps, which version of Windows 7 will be retained: the one in the image (authentic) or the one on the new HDD (pirated)? I am ready to purchase a good program for this task and my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing: I have created a Windows migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro? (In case I get a genuine copy of Windows 7!) Any other method to save all the data? Thanks in advance.

  • Backing up default Windows installation with dd from Linux running on another partition - is this feasible?

    - by Marek
    I am preparing to reinstall my system. I am thinking about creating a multi-boot setup with a Linux distro and Windows 7 to choose from when starting up. I would love to be able to skip all the hassle of reinstalling Windows and all programs when it starts becoming too slow in the future, so I would like to mirror my fresh Windows system partition with some programs preinstalled. I am thinking about installing Ubuntu, making a partition for Windows, installing Windows with the basic environment (Visual Studio, Office, etc.), then booting into Linux and making an image of the Windows partition with dd. I am not familiar with Linux at all, so I am a little afraid something may go wrong along the way. Is it possible to do it this way? Will I be able to partition my existing disk for multi-boot easily after I install Ubuntu? Will I be able to recover the Windows partition easily using dd when I need to re-create a fresh Windows partition in the future? What other (better) approach can you recommend to achieve the goal of easy disk mirroring (for free)?
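    For the dd part specifically, the basic save/restore is two commands. The partition name and the image path below are assumptions (check the real layout with "sudo fdisk -l"), Windows must not be mounted while it is imaged, and the image file needs as much free space as the whole partition:

        # save the freshly installed Windows partition to an image file
        sudo dd if=/dev/sda2 of=/home/me/win7-base.img bs=4M conv=noerror,sync

        # later, roll the partition back to that state (the partition size must not have changed)
        sudo dd if=/home/me/win7-base.img of=/dev/sda2 bs=4M

    A hedged alternative worth a look is ntfsclone (from ntfsprogs/ntfs-3g), which copies only the used NTFS blocks, so the saved image is far smaller than a raw dd of the whole partition.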

  • Problem with deploying a Django application on mod_wsgi

    - by Shehzad009
    Hello, I seem to have a problem deploying Django with mod_wsgi. In the past I've used mod_python, but I want to make the change. I have been using Graham Dumpleton's notes here, http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango1, but it still does not work: I get an Internal Server Error.

    django.wsgi file:

        import os
        import sys
        sys.path.append('/var/www/html')
        sys.path.append('/var/www/html/c2duo_crm')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'c2duo_crm.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    Apache httpd file:

        WSGIScriptAlias / /var/www/html/c2duo_crm/apache/django.wsgi

        <Directory /var/www/html/c2duo_crm/apache>
            Order allow,deny
            Allow from all
        </Directory>

    In my Apache error log, it says I have this error (this is not all of it, but I've got the most important part):

        [Errno 13] Permission denied: '/.python-eggs'
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1]
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] The Python egg cache directory is currently set to:
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1]
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1]   /.python-eggs
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1]
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] Perhaps your account does not have write access to this directory? You can
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] change the cache directory by setting the PYTHON_EGG_CACHE environment
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] variable to point to an accessible directory.
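    The error itself points at a common culprit: mod_wsgi runs the code as the Apache user, whose home directory resolves to "/", so setuptools tries to unpack eggs into /.python-eggs and fails. A hedged sketch of the two usual fixes (the cache paths are placeholders; any directory the Apache/WSGI user can write to will do):

        # Option 1: set the egg cache at the top of django.wsgi, before the other imports
        import os
        os.environ['PYTHON_EGG_CACHE'] = '/var/www/html/c2duo_crm/.python-eggs'

        # Option 2 (Apache config): run the app in a mod_wsgi daemon process and let
        # mod_wsgi manage the egg cache directory
        WSGIDaemonProcess c2duo_crm python-eggs=/var/tmp/python-eggs
        WSGIProcessGroup c2duo_crm

    Whichever directory is chosen has to exist and be writable by the user the WSGI process runs as.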
