Search Results

Search found 17336 results on 694 pages for 'developer events'.

  • nginx: JS file loads inconsistently on every refresh

    - by poymode
    I have an nginx problem where a JS file in a Rails app loads inconsistently. Whenever I access the JS file in the browser and refresh the page, the scrollbar changes length: sometimes it loads half the file, sometimes the whole thing, and sometimes just a small part of it. The JS file is 71K. My nginx server is on a different server, separate from my Rails app. When I access the JS file directly through the app server, let's say 10.48.30.150:3000/javascripts/file.js, it works fine and never shows a half-loaded page. But when I go through the nginx server that upstreams the Rails app, I get the inconsistent page loads. Here is my nginx http conf:

        error_log /usr/local/nginx/logs/error.log;
        pid       /usr/local/nginx/logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include       mime.types;
            default_type  application/octet-stream;
            server_names_hash_bucket_size 256;
            access_log    /usr/local/nginx/logs/access.log;
            sendfile      on;
            #tcp_nopush   on;
            keepalive_timeout 0;
            tcp_nodelay   on;
            #gzip on;
            #gzip_min_length 4096;
            #gzip_buffers 16 8k;
            #gzip_types application/x-javascript text/css text/plain;
            large_client_header_buffers 4 8k;
            client_max_body_size 2G;
            include /usr/local/nginx/conf.d/*.conf;
        }
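    Truncated proxy responses like this are often a buffering problem between nginx and the upstream. The vhost that actually proxies to the Rails app lives in conf.d and isn't shown above, so the upstream name and location below are assumptions; this is only a sketch of the buffer-related directives one might check or tune there:

        # hypothetical vhost in /usr/local/nginx/conf.d/ -- names are placeholders
        upstream rails_app {
            server 10.48.30.150:3000;
        }

        server {
            listen 80;

            location / {
                proxy_pass http://rails_app;

                # make sure the whole 71K response fits in the proxy buffers,
                # or that nginx is allowed to spool it to a temp file
                proxy_buffering    on;
                proxy_buffer_size  16k;
                proxy_buffers      8 32k;
                proxy_read_timeout 60s;
            }
        }

    Watching error.log for "upstream prematurely closed connection" or temp-file warnings while reproducing the problem should confirm whether buffering is actually the culprit.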

    Read the article

  • How to set an executable white list?

    - by izabera
    Under Linux, is it possible to set a white-list of executables for a certain group of users? I need them to be unable to use, for example, make, gcc and executables on removable disks. How can this be done? Edit, let me explain better. I'm dealing with a high school IT system: young geeks that (during the lessons) want to play, surf the net, and damage those computers however they can. The major step towards this goal was to remove the system they're familiar with and install Ubuntu on all the computers. This actually works quite well, but recent events proved that it is not enough. I want to allow them to execute certain safe programs, like OpenOffice, and to deny any other program, whether it is preinstalled software, something they carry in on USB drives, a downloaded program or a script they write on site. It's possible to remove the 'x' permission on any file on the PC, but of course that would be impractical, and they would still be able to run anything they download. I think the best solution would be a white-list of safe programs with everything else denied, but I don't really know how to do it. Any idea is helpful.
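    A common first step, short of a full whitelist, is to make sure nothing outside the system directories is executable at all by mounting user-writable and removable locations with noexec. The devices and mount points below are assumptions for illustration only; a sketch of the relevant /etc/fstab entries might look like:

        # /etc/fstab -- example entries only; adjust devices and mount points
        /dev/sda3   /home        ext4   defaults,nodev,nosuid,noexec      0  2
        /dev/sda4   /tmp         ext4   defaults,nodev,nosuid,noexec      0  2
        # removable media, if mounted via a static entry rather than the desktop
        /dev/sdb1   /media/usb   auto   noauto,user,nodev,nosuid,noexec   0  0

    That blocks USB and downloaded binaries but not the preinstalled compilers; removing make and gcc from the students' reach (uninstalling them, or chmod 750 with a trusted group as owner) covers that part. For a true whitelist, a mandatory access control framework such as AppArmor, or a restricted shell, is the usual route.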

    Read the article

  • Computer not finding hard drives on boot -sometimes-

    - by todd.pund
    Computer specs:

        Mobo: Gigabyte Ultra Durable 3 - GA-970A-UD3
        Processor: first-gen i7, 3.2 GHz
        RAM: 8 GB Kingston DDR3 1066
        Video card: EVGA NVIDIA GTX 460 1GB
        Hard drives: 500 GB 7200 rpm x2 (can't remember the brand, sorry, I'm at work)

    Last week my Developer Preview of Windows 8 ran out, so I put my copy of Windows 7 back on the computer. At that point the computer started suffering from frequent freezing and crashing. When I rebooted, sometimes it wouldn't find the system HD at all; the POST screen seemed to show that it wasn't finding either of the HDs. Then yesterday, when turning on the computer, I just got GRUB as a message (not a GRUB prompt, just GRUB), and I haven't had a dual boot of Linux for at least a year. I loaded the Windows 7 recovery console from the disk and ran:

        bootrec /fixboot
        bootrec /fixmbr

    That did not help. At that point I just installed Ubuntu 13.04 over the Windows 7 install and still got the GRUB message at POST. I went into the BIOS and switched the hard drive priorities, and then it loaded into Ubuntu fine. For several days everything was hunky-dory until I installed the Ubuntu version of Steam, installed Portal and tried to run it. At that point the computer froze, and after a hard reboot it couldn't find the hard disks again. After restarting the system it loaded up fine again and there have been no issues since (I have not tried to launch Portal again). My next thought is to remove the system hard drive and try using the secondary as the master, to see if the primary HD is bad. I'm sorry if this has been confusing; I'll answer any questions I can. Any thoughts?
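    Intermittent detection after freezes points at the drives, cabling or the SATA controller, so it's worth pulling SMART data from Ubuntu before swapping hardware around. A quick check with smartmontools (device names below are the usual defaults and may differ on this machine):

        sudo apt-get install smartmontools
        sudo smartctl -H /dev/sda     # overall health verdict
        sudo smartctl -a /dev/sda     # full attributes; watch Reallocated_Sector_Ct and UDMA_CRC_Error_Count
        sudo smartctl -a /dev/sdb

    A rising CRC error count usually means a cable or port problem rather than a dying disk, which would also fit drives disappearing only after a hard lock-up.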

    Read the article

  • Rails 3 + Nginx + Passenger -- Routing index

    - by Bijan
    I have no index.html file in my public folder; my Rails routes file handles the root route, and it works fine when I run 'rails server' on my machine. Now I'm trying to deploy the app, with Passenger and nginx running on the production server. When I run rails server on my local machine it works fine, but on the production server nginx just tries to serve a static file when I access the site. Here's my nginx conf:

        worker_processes 1;
        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.2;
            passenger_ruby /usr/bin/ruby;

            include       mime.types;
            default_type  application/octet-stream;
            sendfile      on;
            keepalive_timeout 65;

            server {
                listen      80;
                server_name mmjconsult.com;
                root        /www/mmjs/public;
                access_log  logs/host.access.log;
                passenger_enabled on;
            }
        }

    Thank you for any help. I really appreciate it.
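    When every request falls through to static-file handling, the usual suspects are an nginx binary built without the Passenger module or requests landing on a different default server block. A few hedged checks (paths assume the layout in the config above; the error log location may differ):

        nginx -V 2>&1 | grep -o passenger          # confirm the running binary was built with Passenger
        tail -n 50 /usr/local/nginx/logs/error.log  # default <prefix>/logs/error.log; Passenger prints startup errors here
        passenger-memory-stats                      # shows whether any Passenger application processes are alive

    If the module isn't compiled into the binary that's actually running, re-running passenger-install-nginx-module against the same prefix is the quickest fix.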

    Read the article

  • Only one domain is not resolving via Windows DNS server at multiple locations, but is at others

    - by Brett G
    I'm having quite a weird issue. We had mail delivery problems to a specific domain, and after looking closer I realized that DNS for that domain isn't resolving via the in-house Windows 2003 SP2 DNS server:

        C:\>nslookup foodmix.net
        Server:  DC.DOMAIN.com
        Address:  10.1.1.1

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to DC.DOMAIN.com timed-out

    (DC.DOMAIN.com and 10.1.1.1 are generic values replacing the real ones.) Even if I run this nslookup from the DC.DOMAIN.com server itself, I get the same result, while all other lookups work as they should. I had a sysadmin friend try the same lookup on servers at several companies he consults for (also Windows 2003 AD servers). The weird thing is that some of them had the exact same issue, yet public DNS servers resolve the domain fine. I have tried clearing the DNS cache, restarting the server, restarting the services, etc. Nothing has worked. One possibly related entry in the DNS Server event log is event ID 5504 with the following description:

        The DNS server encountered an invalid domain name in a packet from 192.33.4.12. The packet will be rejected. The event data contains the DNS packet. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    In the data section of that event I can see ns2.webhostingstar.com, which happens to be the nameserver for the domain in question. Several discussion threads and a Microsoft KB article pointed to disabling EDNS. I have done this via "dnscmd /config /enableednsprobes 0" and it has not fixed the issue.
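    One way to confirm or rule out EDNS/packet-size problems is to query the domain's authoritative server directly, once with EDNS and once without, from a machine that has the BIND tools installed (dig isn't on Windows 2003 by default, so this assumes a Linux box or installed BIND utilities):

        dig foodmix.net SOA @ns2.webhostingstar.com                         # plain query straight at the authoritative server
        dig foodmix.net SOA @ns2.webhostingstar.com +noedns                 # force a pre-EDNS query
        dig foodmix.net SOA @ns2.webhostingstar.com +edns=0 +bufsize=512    # EDNS but a small UDP buffer
        dig foodmix.net SOA @ns2.webhostingstar.com +tcp                    # does it only answer over TCP?

    If the plain and +noedns queries succeed but the normal resolver path still times out, the problem is more likely a firewall dropping large or fragmented UDP responses between the DC and that particular nameserver than the DNS service itself.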

    Read the article

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 at the moment (2048 MB, i.e. ~2 GB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling nginx and the Rails application. The application itself uses about 185976 of memory per instance (RSS). Our traffic is < 1000 per day and the pages are mostly cached, so there are few hits to the Rails app itself. My question is: how can I calculate optimal nginx config settings for my app? Below is the current config:

        worker_processes 1;

        # pid of nginx master process
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            access_log /var/log/nginx/access.log;
            error_log  /var/log/nginx/error.log;

            passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
            passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;

            include       mime.types;
            default_type  application/octet-stream;
            sendfile      on;
            tcp_nopush    on;
            tcp_nodelay   on;

            # gzip settings
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_vary on;
            gzip_proxied any;
            gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            # load extra modules from the vhosts directory
            include /opt/nginx/vhosts/*.conf;
        }

    Any advice would be appreciated! :)
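    A rough rule of thumb rather than anything official: set worker_processes to the number of CPU cores, leave worker_connections at 1024 unless connection counts actually approach it, and size Passenger's pool by RAM. With roughly 2 GB total, MySQL elsewhere, and (assuming that RSS figure is in KB) ~180 MB per Rails instance, something like this is a reasonable starting sketch:

        # cores available:  grep -c ^processor /proc/cpuinfo
        worker_processes  4;

        events {
            # per-worker limit; theoretical ceiling = worker_processes * worker_connections
            # (halve it when proxying, since each client also ties up an upstream connection)
            worker_connections  1024;
        }

        http {
            # ~2 GB total minus OS/nginx overhead, divided by ~186 MB per instance,
            # leaves headroom for roughly 6-8 app processes; 4 is a safe pool here
            passenger_max_pool_size 4;
        }

    At under 1000 requests a day with most pages cached, nginx itself will never be the bottleneck; memory used by Passenger instances is the only number really worth watching (passenger-memory-stats shows it directly).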

    Read the article

  • Standing and sitting while using a computer at work

    - by Adam Batkin
    I would like to be able to comfortably switch between sitting and standing while at work (I'm a software developer, so I spend most of my day in front of a computer). For the past couple of months I have been using a large elevated stand that sits on my desk (designed expressly for this purpose) containing my keyboard and mouse, and my monitors have been raised as high as possible and aimed upwards. So I can stand all day and I'm pretty comfortable (my right wrist may be at too much of an angle when it's on my mouse, but that's a separate issue). The only problem is that sometimes I want to be able to sit. I can easily place my keyboard and mouse back down under the elevated stand, but I have to look up pretty steeply and that is uncomfortable and makes it difficult to see the screens since they are tilted upwards. My monitor mounts are difficult to adjust quickly/easily, so I can't just re-aim them. I would of course love one of those hydraulic standing/sitting desks (cost isn't the problem). But I'm in a row of "trader-style" desks where it's basically a very long surface with people sitting at 6-foot intervals. What type of equipment do you recommend? I suppose the best thing would be some sort of monitor stand (it must be able to hold 2-3 LCDs) that can easily be lowered and raised. But any other suggestions are also welcome.

    Read the article

  • RAID degraded on Ubuntu server

    - by reano
    We're having a very weird issue at work. Our Ubuntu server has 6 drives, set up with RAID1 as follows:

        /dev/md0: /dev/sda1 + /dev/sdb1
        /dev/md1: /dev/sda2 + /dev/sdb2
        /dev/md2: /dev/sda3 + /dev/sdb3
        /dev/md3: /dev/sdc1 + /dev/sdd1
        /dev/md4: /dev/sde1 + /dev/sdf1

    As you can see, md0, md1 and md2 all use the same two drives (split into three partitions). I should also note that this is Ubuntu software RAID, not hardware RAID. Today the md0 RAID1 array shows as degraded: it is missing the /dev/sdb1 member. But since /dev/sdb1 is only a partition (and /dev/sdb2 and /dev/sdb3 are working fine), it's obviously not the whole drive that's gone AWOL; the partition itself seems to have dropped out. How is that even possible? And what can we do to fix it? My output of cat /proc/mdstat:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md1 : active raid1 sda2[0] sdb2[1]
              24006528 blocks super 1.2 [2/2] [UU]

        md2 : active raid1 sda3[0] sdb3[1]
              1441268544 blocks super 1.2 [2/2] [UU]

        md0 : active raid1 sda1[0]
              1464710976 blocks super 1.2 [2/1] [U_]

        md3 : active raid1 sdd1[1] sdc1[0]
              2930133824 blocks super 1.2 [2/2] [UU]

        md4 : active raid1 sdf2[1] sde2[0]
              2929939264 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    FYI, I tried the following:

        mdadm /dev/md0 --add /dev/sdb1

    but got this error:

        mdadm: add new device failed for /dev/sdb1 as 2: Invalid argument

    Output of mdadm --detail /dev/md0:

        /dev/md0:
                Version : 1.2
          Creation Time : Sat Dec 29 17:09:45 2012
             Raid Level : raid1
             Array Size : 1464710976 (1396.86 GiB 1499.86 GB)
          Used Dev Size : 1464710976 (1396.86 GiB 1499.86 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 15:55:07 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   Name : lia:0 (local to host lia)
                   UUID : eb302d19:ff70c7bf:401d63af:ed042d59
                 Events : 26216

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       0        0        1      removed
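    "Invalid argument" on --add with a removed member often means a stale or mismatched superblock on the partition. A hedged diagnostic and repair sequence (the zero-superblock step destroys RAID metadata on /dev/sdb1 only, so double-check the device name first):

        # compare what the partition thinks it belongs to vs. what the array reports
        sudo mdadm --examine /dev/sdb1
        sudo mdadm --detail  /dev/md0

        # check the kernel log for why sdb1 was kicked out in the first place
        dmesg | grep -i -E 'sdb1|md0'

        # if the superblock is stale or corrupt, wipe it and re-add; a full resync follows
        sudo mdadm --zero-superblock /dev/sdb1
        sudo mdadm /dev/md0 --add /dev/sdb1

        watch cat /proc/mdstat    # recovery progress

    If --examine shows read errors rather than stale metadata, that region of the disk is genuinely failing and the drive should be replaced instead of re-added.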

    Read the article

  • What's the difference between pulling from a branch into master and pushing that branch onto master?

    - by Justin808
    In TortoiseGit, on the repository, I right-click and select Sync. At the top of the dialog there are options for Local Branch and Remote Branch. If the local branch is named DeveloperA and the remote branch is master and I do a Push, what happens? If the local branch is master and the remote branch is DeveloperA and I Pull, what happens? If I am on the master branch, right-click, select Merge and change the From to my DeveloperA branch, what happens? If I try to push from master to the remote master and the remote has been updated, git stops and tells me to pull. It seems that if I push from DeveloperA to master it doesn't stop, it just clobbers; is that correct? We're having an issue using git where the remote master branch gets clobbered at times and we are trying to figure out why. For example, there is a developer working on his DeveloperA branch. He'll pull from master to get any updates, then push to master to push out his changes. But there are times when the push lists more files in the Out Commit list than he has edited. The odd thing is that he can't revert those files, as git says they are up to date and have not been modified; yet when he pushes, git pushes those files out. The problem is that if there were changes between his pull and his push, those changes get clobbered.
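    It can help to translate the TortoiseGit dialogs into the underlying commands; the branch names below simply mirror the ones in the question. A push never merges for you: pushing DeveloperA to master only succeeds (without force) when it is a fast-forward, so genuine clobbering usually means someone is force-pushing or rewriting history. The safe command-line equivalent of "pull from master, then publish my work" is:

        # update local master and fold it into the feature branch
        git checkout master
        git pull origin master
        git checkout DeveloperA
        git merge master            # or: git rebase master, if the team prefers

        # publish: merge the feature branch into master, then push master to master
        git checkout master
        git merge DeveloperA
        git push origin master      # refuses a non-fast-forward push unless forced

    Comparing what is about to go out (git log origin/master..master before pushing) will show whether those "extra" files really carry new commits or are just the results of the merge.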

    Read the article

  • MongoDB and GridFS. What are the best storage options in the range of 1 TB?

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files, and I am weighing the different options for hosting the database. Since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and I do not like to mess with server configuration, so I would like a fully managed hosting solution, but I would like to know about any other option if you think it is worth it. It should be able to scale, cloud style, pay as you go. The lower the price, the better. So far I know of these services:

        https://mongohq.com/pricing
        https://mongomachine.com/pricing
        https://mongolab.com/about/pricing/
        http://cloudcontrol.com/add-ons/mongodb/

    They seem fine for common needs, that is, no file storage. But I am going to use GridFS, so size matters, and these services seem to scale quite poorly in price:

        MongoHQ: the largest plan maxes out at 20 GB of storage. Seems like very little storage for GridFS.
        MongoMachine: flat price, $2.50 per GB; I didn't find a limit. Seems like a good price compared to the others.
        MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly.
        CloudControl: the largest plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.

    What is your experience with these services? Any downtime? Other possibilities?

    Read the article

  • Application that will identify percentage of your system disk bandwidth used on a user-application by user-application basis?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it really is), so I'm always looking for ways to measure and understand what it is actually doing that makes it seem slow to me. It has been my observation that my software-developer workload is most often disk-bound (waiting for disk I/O) rather than CPU-bound. What makes it worse is that I am using a corporate PC with in-memory active-scanning anti-virus software that I do not have control over, plus some IT-department-mandated services that seem to soak up a lot of the available hard-disk bandwidth. The best tool I have found (in Windows 7) is the Resource Monitor, which I usually access from the button in Task Manager. Its disk I/O page, however, labels disk activity at a very low level: for example, it shows writes by the Volume Shadow Storage, which is really flushing information written by something else, and writes to pagefile.sys, which are really due to virtual-memory faults in some application. What I would like to know is whether a utility exists that can add up all direct disk input and output per user-level process, or attribute VM or VSS activity back to the process that caused it. That way, I hope, you could establish a real idea of how much of your computer's precious disk-subsystem bandwidth is attributable to a particular application. Here's a scenario: MyApp.exe writes 100k/s and reads 100k/s directly; VSS ends up writing another 100k/s; page faults caused inside MyApp.exe cause another 100k/s of writes. So the total "cost" of MyApp.exe running over that period (let's say 1 second) is 400k/s, whereas Resource Monitor lets you directly observe only half of that. Is there a smarter disk-IO-watching piece of software I can use?
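    The built-in performance counters can at least capture the direct per-process I/O over a window of time, which covers the first part of the wish list (attributing VSS and pagefile traffic back to the originating process is the part nothing built-in does). A hedged example using typeperf from a command prompt:

        rem sample every process's direct I/O once a second for 60 seconds into a CSV
        typeperf "\Process(*)\IO Read Bytes/sec" "\Process(*)\IO Write Bytes/sec" -si 1 -sc 60 -o perproc_io.csv

        rem system-wide disk picture for the same window, for comparison
        typeperf "\PhysicalDisk(_Total)\Disk Read Bytes/sec" "\PhysicalDisk(_Total)\Disk Write Bytes/sec" -si 1 -sc 60 -o disk_total.csv

    Summing the per-process columns and comparing them with the PhysicalDisk totals gives a rough idea of how much bandwidth goes to things (VSS, pagefile, the antivirus filter driver) that never show up against the application that triggered them; xperf/the Windows Performance Toolkit is the next step up if that gap needs to be explained precisely.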

    Read the article

  • Javascript loading never completes on many sites

    - by Joe
    I recently moved country and have found that on many websites the page never finishes loading. In some cases no content is ever displayed, but the loading never times out. The Developer Tools in Chrome show me that it is the JavaScript files which never load. For example, this BBC article will never load compatability.js, though it loads all the other JS files perfectly. Google Maps often fails to finish loading, meaning it's impossible to make searches. There seems to be no pattern to which files fail to load (i.e. they don't come from the same CDN). I have tried Chrome, Safari and Firefox on OSX 10.8, and Chrome on my girlfriend's OSX 10.7, and I have similar issues on the iPad. In many cases the mobile version of the page, where there is one, loads fine. I have run the browsers in private mode, disabled plugins, updated Flash, cleared the cache and flushed the DNS cache, though it would seem that if this is happening on other devices too, none of that would help anyway. Is this an ISP issue? And if so, why would it be limited to certain JS files and not all? JS files from the same domain work fine, so I'm not really sure what I should be looking for.
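    Since the stalls follow specific URLs across browsers and devices on the same connection, comparing transfers at the HTTP level is more telling than further browser-side resets. A sketch using curl from Terminal (the URL is a placeholder; paste the exact script URL from the Developer Tools network tab):

        # fetch the stalling script, printing status, bytes received and timing
        curl -s -o /dev/null -w 'HTTP %{http_code}  %{size_download} bytes  %{time_total}s\n' \
            'http://static.example.com/path/to/compatability.js'

        # repeat against a different resolved address for the same host, to compare network paths
        curl -s -o /dev/null -w 'HTTP %{http_code}  %{size_download} bytes  %{time_total}s\n' \
            --resolve static.example.com:80:203.0.113.10 \
            'http://static.example.com/path/to/compatability.js'

    If curl also hangs or reports short byte counts for certain hosts, that points at the ISP, a transparent proxy, or an MTU/fragmentation issue on the new connection rather than anything on the Macs; MTU problems in particular tend to hit only some CDN paths and not others.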

    Read the article

  • Fixed ruby/mysql connection with new libmysql.dll, and broke Apache in the process

    - by jmtoporek
    OK, so a bit of background: all my development has been on a local Windows 7 machine. I had Apache with PHP/MySQL running with no issues. I've been using Ruby (1.9.3, with the latest Rails release, 3.2.9) with the built-in WEBrick server, but had a devil of a time connecting to MySQL. I did some research, updated my libmysql.dll file in C:\ruby\bin, and it worked! Very happy... except now Apache has stopped working. While trying to resolve that, I found an older copy of libmysql.dll, renamed the new file, copied the old file back to C:\ruby\bin, and now Apache works but Ruby does not. So I can keep taking this backwards approach, but obviously that seems pretty stupid. I was surprised that Apache was using the DLL in the ruby/bin folder at all; I presume this is related to the PATH variable? I'm hoping someone can show me how to use one DLL for Apache and another for Ruby, or suggest a smarter approach. I'm smart enough to follow directions to install Apache from scratch and enable PHP on Windows as well as Ubuntu, but I'm not much of a sysadmin, just a semi-competent web developer.
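    Windows resolves DLLs for a process by searching the application's own directory before it ever consults PATH, so the cleanest fix is usually to keep each program's own copy of libmysql.dll next to its executable instead of relying on C:\ruby\bin being on the PATH. The directory names below are common defaults and only an illustration:

        rem give Apache its own copy next to httpd.exe (directory names are assumptions)
        copy libmysql_old.dll "C:\Apache2.2\bin\libmysql.dll"

        rem keep the newer, Ruby-compatible copy where the mysql/mysql2 gem expects it
        copy libmysql_new.dll "C:\ruby\bin\libmysql.dll"

    Because each process picks up the DLL sitting beside its own .exe first, Apache and Ruby can then load different versions without renaming files back and forth.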

    Read the article

  • email attachments [closed]

    - by Alan Doolan
    My company currently uses software on a local machine that takes an email from the email server, extracts the attachment, renames it and then adds it to a folder on a webserver using FTP. This works well, but they are now asking if it can be done 'in the cloud', or what they really mean: not locally. Is there anything that would do this on the server itself? I should clarify a bit. The attachments are various reports being sent to different email addresses (mostly Google corporate and free accounts). We need the reports to end up in a folder on a webserver so that internal pages can take the information in the (CSV) reports and use it on the webpages or add it to a separate database. The key part is that the files need to land in particular folders. It does work to have a computer running software that takes the files, renames them as required and uploads them to the folder, but it relies too heavily on one computer working all the time, and that is not something we can depend on at this point. I'll be honest: I'm a web developer and not strong with server systems beyond my standard requirements, so this is beyond me. And yes, I am aware that my boss is not 100% sure what 'cloud' means but likes the word.

    Read the article

  • Solutions on how to use an OS X calendar as a more perfect time tracking solution for 5-10 users in a small agency?

    - by jnthnclrk
    I really like OS X's iCal. Entering events is easy with the mouse, and it also gives you a very real visual sense of how long tasks take to complete. We often work remotely in our organisation, so we use a few shared calendars between key individuals to give us an overview of hours worked, availability and schedule conflicts without too much disruption to our various, hectic workflows. It really is a neat solution, especially on shared tasks. How many times have you tasked a remote colleague and then lost the thread on whether that task was completed or not? With shared calendars you get a much clearer idea of what your people are working on without having to pick up the phone or compose a chat. However, there are a few areas where this approach fails:

        iCloud syncing often needs to be re-jiggered.
        The "view only" option on shared calendars does not seem to work, which leaves all shared calendars editable by others.
        There is no decent reporting with this workflow.
        There is no task categorisation or tagging.
        Things get very busy in iCal when working with more than 2 shared calendars.

    I've looked at a few task management apps like Basecamp and Harvest, but nothing appears to let me edit my calendar natively and then sync with a 3rd party. I'm interested in solutions to improve the above workflow and let us elegantly increase the number of users.

    Read the article

  • Advice for UPS/Surge Protector in home office

    - by Fred
    I'm just starting out as an independent developer, mostly Unix stuff with some Windows thrown in occasionally. I've been running two machines, a Linux and a Windows dev machine. Long story short, we had a bad storm come through last week; I unplugged one machine, forgot to unplug the other, and its PSU and mobo ended up dead. Luckily I back up to an external service religiously (rsync.net, for anyone interested), so there was no loss of data, but it did show me a glaring hole in my current setup, namely the lack of a UPS and surge protection (this has honestly never been an issue before). Can anyone recommend a UPS/surge protector for a home office? It only needs to support a single machine (I opted to use VMware instead of rebuilding the dead one), but it's a quad-core Phenom II with a 1 kW PSU. This is outside my experience, so I thought I'd get some input from others. I'm looking for something reasonably priced that does the job reasonably well. I don't need absolute 100% uptime, just something that protects my PC better than it is now.

    Read the article

  • .htaccess hacked - I've deleted the code and file - what next?

    - by user1762595
    My website was hacked recently. I think I've found the code that was added to the .htaccess file, deleted it, and then added directives to prevent the .htaccess file from being accessed again. I've also deleted the PHP file that the hacked code refers to (common.php). What do I need to do next? I'm not a programmer or website developer, but I really wanted to see if I could fix the problem myself, as I've spent quite a few hours trying and don't give up easily. Here is the hacked code that I deleted:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} (google|yahoo) [OR]
        RewriteCond %{HTTP_REFERER} (google|yahoo)
        RewriteCond %{REQUEST_URI} /$ [OR]
        RewriteCond %{REQUEST_FILENAME} (shtml|html|htm|php|xml|phtml|asp|aspx)$ [NC]
        RewriteCond %{REQUEST_FILENAME} !common.php
        RewriteCond /home/httpd/vhosts/bluestardive.com/httpdocs/common.php -f
        RewriteRule ^.*$ /common.php [L]
        </IfModule>

    The following code has to stay in the .htaccess file because it redirects my URLs to SEO-friendly ones (the site errors without it), but has this code been tampered with as well?

        # Apache search queries statistic module
        RewriteEngine On
        AddHandler php5-fastcgi .php .php5
        # <contrexx>
        # <core_modules__alias>
        RewriteRule ^about-us$ /index.php?page=883 [L,NC]
        RewriteRule ^ausfluge-und-aktivitaten$ /index.php?page=800 [L,NC]
        RewriteRule ^bluestardive-news$ /index.php?page=919 [L,NC]
        RewriteRule ^bookings$ /index.php?page=911 [L,NC]
        RewriteRule ^diveresort$ /index.php?page=879 [L,NC]
        RewriteRule ^diving$ /index.php?page=880 [L,NC]
        RewriteRule ^excursions-and-activities$ /index.php?page=881 [L,NC]
        RewriteRule ^galerie$ /index.php?section=gallery [L,NC]
        RewriteRule ^oceannight$ http://www.bluestardive.com/index.php?page=906 [L,NC]
        RewriteRule ^philosophy$ /index.php?page=846 [L,NC]
        RewriteRule ^reservation$ /index.php?page=917 [L,NC]
        RewriteRule ^reservierung$ /index.php?page=918 [L,NC]
        RewriteRule ^resort$ /index.php?page=798 [L,NC]
        # </core_modules__alias>
        # </contrexx>

    Many thanks for any help. Claire
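    For reference, the usual way to block web access to .htaccess (and .htpasswd) on Apache 2.2 looks like the snippet below; if the block you already added resembles this, it is doing its job. Note, though, that the injected rules most likely got there through a compromised FTP/hosting account or a vulnerable script (the comments suggest the site runs the Contrexx CMS), not through the .htaccess file being readable over the web, so changing passwords and updating or auditing the CMS matters more than the deny rule:

        <FilesMatch "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>

    On Apache 2.4 the equivalent is "Require all denied" inside the same FilesMatch block.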

    Read the article

  • Server 2008 email on Event variables

    - by Jeff Miles
    One of the new features of Server 2008 is the ability to attach a task to a specific event in the event logs. One of the available actions is to send an email through an SMTP server. This is working great; however, it would be ideal if the event contents could be placed in the message body. I have tried using $eventdescription and %eventdescription%, but those are just shots in the dark. Any amount of googling produces no results. Does anyone know if this is possible? Update: Sparks' suggestion below is a step in the right direction, I believe; however, that method doesn't seem to work for all values. For example, I can pull the RecordID, Severity and Channel as shown, but I can't use the same method to retrieve the EventID or, most importantly, the description. Here's the raw XML from one event:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="DFSR" />
            <EventID Qualifiers="16384">4412</EventID>
            <Level>4</Level>
            <Task>0</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2009-05-14T18:18:09.000Z" />
            <EventRecordID>45692</EventRecordID>
            <Channel>DFS Replication</Channel>
            <Computer>servername.domain.com</Computer>
            <Security />
          </System>
          <EventData>
            <Data>9046C3F4-843E-4A53-B941-4B20764072E5</Data>
            <Data>D:\departments\Geomatics\Plan Quality\Data Processing\CG3533017 2009-05-13 KT FIXED</Data>
            <Data>D:\departments</Data>
            <Data>{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
            <Data>Departments</Data>
            <Data>domain.ca\files\departments</Data>
            <Data>B8242CE2-F5EB-47DA-BA5B-1DD2F7EE3AB9</Data>
            <Data>DFAA7A54-66CB-4C31-81A0-0F861382C32C</Data>
            <Data>CG3533017 2009-05-13-{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
          </EventData>
        </Event>

    I have tried using a ValueQuery for EventData, but it returns no data.
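    The ValueQuery mechanism works by exporting the scheduled task to XML, adding XPath queries under the EventTrigger, and re-importing it; each named value can then be referenced as $(name) in the action's fields. The DFSR event above has several unnamed <Data> elements, so they have to be selected by position. A hedged sketch of the relevant fragment of the task XML (the value names are arbitrary):

        <EventTrigger>
          <Subscription>...</Subscription>
          <ValueQueries>
            <Value name="eventRecordID">Event/System/EventRecordID</Value>
            <Value name="eventChannel">Event/System/Channel</Value>
            <Value name="eventID">Event/System/EventID</Value>
            <Value name="eventData1">Event/EventData/Data[1]</Value>
            <Value name="eventData2">Event/EventData/Data[2]</Value>
          </ValueQueries>
        </EventTrigger>

    In the email action's body, $(eventID), $(eventData1) and so on are substituted when the task fires. There is no XPath for the rendered description text itself, since the description is composed from the provider's message template plus these Data values, so concatenating the relevant Data elements is usually as close as the built-in email action gets.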

    Read the article

  • linux shutdown hang with wifi cifs mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a CIFS mount via Puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection it shuts down just fine, but if it's on the wifi at the time (and has no wired connection) it hangs at the Fedora logo. I'm not sure if it hangs indefinitely or just for a really long while, but I'll give it a test when I shut this machine down in a second. Needless to say it's pretty annoying. So is there a way of making the machine shut down even if network connectivity has been lost by unmount time -- or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom shutdown script, which is a bit of a kludge)? It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue that I'd assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet. Additional info: I can confirm that manually unmounting the shares and then shutting down (sudo shutdown or the Xfce shutdown button) works fine; it only hangs if the mounts are still mounted. The Puppet config that sets up the mount looks like this (it now includes the _netdev option, which is indeed pushed to clients successfully but makes no difference):

        file { "/mnt/share":
          ensure => directory,
        }

        mount { "/mnt/share":
          atboot   => true,
          ensure   => mounted,
          remounts => false,
          fstype   => cifs,
          device   => "//srv/share",
          options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
          require  => [ File["/mnt/share"], Group["shareusers"] ],
        }
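    As a workaround until the ordering is fixed properly, lazily unmounting the CIFS shares before the wireless interface drops avoids the hang, and Puppet can push the same one-liner everywhere. Where exactly to hook it in is distribution-specific (on Fedora 15/16 a small systemd unit or an early shutdown script both work), so this is only a sketch of the script itself:

        #!/bin/sh
        # detach all CIFS mounts immediately; pending I/O is cleaned up lazily,
        # so shutdown no longer blocks waiting for a server it can't reach
        umount -a -t cifs -l

    The -l (lazy) flag is the important part: a plain umount would hang for the same reason the shutdown does once the wifi is already down.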

    Read the article

  • Laptop seemingly randomly "freezes" to the point of no longer executing applications

    - by Aierou
    After upgrading to Windows 8 Pro on my Samsung Series 7 Chronos NP700Z5C-S04US (which may be relevant, I'm not sure), my computer began to stop allowing any service or application to run, with the clock no longer updating, until a hard shutdown is performed. This seems to occur randomly after periods of inactivity and I've no idea of the cause. Measures I have already taken to try to stop this:

        Obviously Googling for potential answers to this problem.
        Updating all drivers.
        Researching all events that occurred around the time of the failures (with no results).
        Applying "bcdedit /set disabledynamictick no", which was a hotfix for what seemed to be the same error, but was not.

    Some more, potentially related, information about the error:

        No BSOD (in fact, I haven't experienced a BSOD at all with Windows 8).
        The computer seems to have a problem shutting down/restarting most of the time (it hangs at the point where it should completely turn off).
        New sound instances are not able to play, but previously loaded containers keep working.
        As mentioned before, the clock freezes at the time of the error.
        USB devices keep functioning properly.
        Servers that I was running fail to respond on my end, but stay online.

    If you need more information, please ask for it specifically and I will be happy to oblige. Thanks.

    Read the article

  • Is the Windows VPN secure?

    - by Tor Haugen
    I have used a few VPN solutions over the years. Most are hard to set up, slow to connect and/or rather ill-behaved (replacing system drivers, disrupting each other, etc.). One solution I have never used before is the one built into Windows, mostly because the infrastructure guys always refuse to use it, claiming it's 'not secure'. Now I have finally had the chance to use it (on Windows 7), and wow, it's a breeze! Easy to set up, well-behaved, it connects almost instantly, automatically authenticates with my logged-in credentials, and integrates excellently with the UI. I have to say, unless it really isn't secure, I'll be happy if I never have to use another VPN product ever again. I gather the Windows VPN used to rely on PPTP, which is not considered secure. But in Windows 7/2008 it supports L2TP/IPSec, SSTP and IKEv2, and authenticates with EAP or CHAP/CHAPv2. That seems pretty up-to-date to me. But I'm just a lowly developer. Can someone in the know give me the lowdown on this?

    Read the article

  • Mail-Merge on Steroids: Can Word 2003 do this?

    - by richardtallent
    I have a huge report to put together, made up of over 1,000 smaller, nearly identical reports. Each report includes:

        General 1:1 information (basic mail-merge stuff).
        Lots of text, some of which may need to be disabled, or swapped for alternate text, based on a boolean field.
        A few embedded images, preferably loaded via an HTTP URL; if they have to be on a file system, I can do that. (Filenames will be provided as a field in the data source.) Fortunately, all images are roughly the same size and shape.
        Several 1:m tables with a few fields apiece.

    The kicker is the master/child tables. I've seen examples for Word 2000 that left-join the master and child table and use some IF/THEN logic to know when to jump to the next master record, but in my case I have several of these subtables, so that approach won't really work. So, can Word 2003 handle arbitrary master/child tables? If so, how? If not, I've considered InfoPath, but I haven't used it before, and it seems to be made for data entry rather than long formatted reports. I'm a software developer, so I could always hack something together with a massive VBA macro, or generate the report as HTML on the web server (where the data is coming from anyway). But I'm hoping Word will work without such gymnastics, since it will give the ultimate users of the report template better control over formatting and making minor changes.

    Read the article

  • Chrome Lockups Windows 7 64-bit

    - by Mike Chess
    I'm running Google Chrome (6.0.427.0 dev) on Windows 7 Home Premium 64-bit (AMD Phenom 3.00 GHz, 8 GB RAM). The computer locks up hard after Chrome has been running for about five minutes. The lockup happens whether Chrome is actually being used to browse web sites or is just idling. No programs can be started or interacted with when this happens; the computer must be power-cycled to recover. The lockup happens regardless of which web sites are being browsed, and the system event logs show no events around the time it occurs. All other applications run just fine on this system. In fact, Chrome ran without issue for several months on this system (it was brand new in March 2010), and I run the same version of Chrome on other computers (Windows XP SP3) without issue. I've come to really like Chrome and use it as my default browser whenever possible. What could be causing Chrome to lock up the system like this? Does Chrome have any logs that aren't part of the Windows event log? Does Chrome have a debug command-line switch that might reveal more about what happens?
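    Chrome does have logging switches that don't depend on the Windows event log; launching it from a shortcut with these flags writes a chrome_debug.log into the profile's user data directory (flag behaviour can vary a little between Chrome versions, so treat this as a sketch):

        chrome.exe --enable-logging --v=1

    That said, a full-system hard lock that only one application triggers usually implicates a driver (video or network) rather than the application itself, so updated GPU and NIC drivers are worth checking alongside the log.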

    Read the article

  • Setup shared internet connection on virtualbox with fixed IP

    - by Tom
    I am a web developer, and until recently I was using Ubuntu as my OS. For various reasons I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I'm struggling a little with the networking. Since I work in different places and visit clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work against my local server. At the moment I have a dynamic IP on the host and a static IP on the guest. That way I can reach the server from the host (by adding a record to the hosts file), and the guest also has an internet connection. But once I change networks it stops working (assuming the new network has a different configuration). My question is: how do I set up host-guest networking so that, no matter what network I connect to, I can keep a static IP on the guest, registered in the hosts file on my host so I can access the webserver, while the guest still has an internet connection? Hope that makes sense. Thank you.
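    The usual VirtualBox answer is two adapters on the guest: a host-only adapter carrying a fixed private address that never changes with the outside network, plus a NAT adapter for internet access. The VM name and addresses below are placeholders; a sketch of the host-side setup from the command line (the same can be done in the GUI):

        VBoxManage hostonlyif create
        VBoxManage list hostonlyifs    # note the name of the adapter it created
        VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1 --netmask 255.255.255.0

        VBoxManage modifyvm "devserver" --nic1 nat
        VBoxManage modifyvm "devserver" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"

    Inside the guest, give the second interface a static address such as 192.168.56.10 and point the hosts-file entry on Windows at that address; the NAT adapter keeps its own lease for outbound traffic, so changing physical networks never touches the 192.168.56.x link.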

    Read the article
