Search Results

Search found 9083 results on 364 pages for 'startup scripts'.

Page 288/364 | < Previous Page | 284 285 286 287 288 289 290 291 292 293 294 295  | Next Page >

  • How to display escaped characters in tmux status bar

    - by walrus
    I am running tmux from a tty on an embedded Linux device (NOT a terminal emulator). Because the screen is rather small, I want to add some "icons" to the tmux status bar. To achieve this, I have simply created a font with the appropriate glyphs for things like battery or wifi. I can load the font and display the characters with calls that use an escape into the line-drawing characters, like so:

        echo -e "\xe\234\xf"

    \xe escapes me into line-drawing character mode, \234 is my created character, and \xf returns me to normal character mode so my terminal doesn't start getting goofy. This works perfectly if I enter the command at the terminal, whether tmux is started or not. The issue arises if I then try to use it in my ~/.tmux.conf file for the status bar. I currently have a line like this:

        set -g status-right "#(echo -e "\xe\234\xf") #(/script/to/output/powerlevel)"

    This simply outputs:

        \xe\234\xf powerlevel

    It goes the same if I try printf instead of echo. This is the output I would expect to get at the terminal if I made the call without passing -e to echo, or without enclosing the statement in quotes. I then decided to wrap the calls to echo or printf in a shell script. Again, the script works when called from the terminal, but not in tmux's status bar. Now I get the unprintable character "?" instead of my icon, like this:

        ? powerlevel

    This is what I would expect if I did not use the line-drawing escapes mentioned above, or if I tried to copy and paste the character as text using tmux. In addition, calling these character scripts screws up the rest of my status-right: the clock shows about 6 digits for minutes when it is called (though it correctly only updates two of them). How can I make tmux respect the escape characters? Any help or insight is greatly appreciated.
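
    One possible direction, sketched under assumptions (untested): the nested double quotes inside that status-right line end the outer string early, so tmux may never see the escapes at all. Moving the bytes into a wrapper script and single-quoting status-right sidesteps the quoting; printf with octal escapes (\016 is the SO byte \xe names, \017 is SI) avoids echo -e portability issues. Note that tmux may still filter non-printable bytes out of #() output, in which case no amount of quoting will help.

        #!/bin/sh
        # icon.sh -- hypothetical wrapper: emit SO, the glyph, then SI
        printf '\016\234\017'

        # ~/.tmux.conf -- single quotes so tmux receives the #() calls intact
        set -g status-right '#(/path/to/icon.sh) #(/script/to/output/powerlevel)'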

    Read the article

  • Gathering BusLogic SCSI hardware and virtual machine operating system

    - by Julian
    I'm trying to use PowerShell to get the SCSI hardware from several virtual servers and get the operating system of each specific server. I've managed to get the specific SCSI hardware that I want to find with my code; however, I'm unable to figure out how to properly get the operating system of each of the servers. Also, I'm trying to send all the data that I find into a CSV log file, but I'm unsure how to make a PowerShell script create multiple columns. Here is my code (almost works, but something's wrong):

        $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"
        Get-VM | Foreach-Object {
          $vm = $_
          Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } | Foreach-Object {
            get-VMGuest -VM $vm
          } | Foreach-Object {
            Write-output $vm.Guest.VmName >> $log
          }
        }

    I don't receive any errors when I run this code; however, I'm only getting the name of the servers and not the OS. I'm also not sure what I need to do to make the OS appear in a different column from the name of the server in the CSV log that I'm creating. What do I need to change in my code to get the OS version of each virtual machine and output it in a different column in my CSV log file?

    EDIT: Here's a more in-depth look at things I've tried that have all failed:

        Get-VM | Foreach-Object {
          $vm = $_
          $svm = Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" }
          Foreach-Object { get-VMGuest -VM $svm } | Foreach-Object { Write-output $svm >> $log }
        }

        #Get-VM | Foreach-Object {
        #  $vm = $_
        #  Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic"} #| write-host $vm
        #  | Foreach-Object {
        #    #get-VMGuest -VM $_ |
        #    #write-host $vm
        #    #get-VMGuest -VM $vm } | Foreach-Object{
        #    #write-output $vm.VmName >> $log
        #    #write-output $vm.guest.VmName, get-VmGuest -VM $vm >> $log NO GOOD
        #    Write-host $vm.Guest.VmName #+ get-vmGuest -vm $VM >> $log
        #  }
        #}

    I'm not sure why Get-VMGuest fails, though. I'm getting the SCSI hardware, filtering it down to only the BusLogic controllers, and then wanting to get the operating system of just the filtered VMs. I don't see where my code fails.

    Read the article

  • Plone site randomly serving wrong content

    - by Chris Miller
    I have a Plone site that has begun to randomly serve up the wrong content. Any given piece of content suddenly shows something else: sometimes a JPEG loads a stylesheet instead, or a stylesheet loads as a page, or a page as an image. The images move around; sometimes our site logo shows a bullet, or one of the other site images. Fiddler shows the wrong content in the response, and the Apache logs show the content type of the incorrect file (so if an image loads in place of a stylesheet, Apache shows that). We thought mod_proxy was the source of our grief, but we get the problem hitting Zope directly. I never get the wrong content when using the Medusa Monitor to repeatedly hit the content. I do see ConflictErrors in the instance.log file, and they seem to be correlated to the problem, but not 100%:

        ZPublisher.Conflict ConflictError at \path\to\object: database conflict error (oid 0x3586, class BTrees._OIBTree.OIBTree, serial this txn started with blah, serial currently committed blah) (X conflicts (0 unresolved) since startup blah)

    I pulled that off the web; it's not from our logs, but it's the same message. This may be a red herring, as it sounds like those messages are normal. We've updated to 3.3.5, same problems. I'm at a loss. I'm wondering: is there a good way to intercept what is being served? Secondly, is there a way to increase the verbosity of the access log to include the content type? I've even seen the problem manifest in the ZMI. It happens more often when we're authenticated. Sometimes it can take a thousand reloads to see the problem; other times it happens in different ways every time we reload. I believe we've seen this problem for a couple of years, but it was very intermittent: a page would show the content of a GIF, then after a reload it wouldn't happen again for a long time. Now it's a huge problem.
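
    On the access-log question, a sketch of one approach, assuming Apache sits in front as described (the format string below is an illustration, not the site's existing config): mod_log_config can record the Content-Type response header alongside each request, which would show exactly what type Apache returned for each hit.

        # httpd.conf sketch -- log the Content-Type of every response
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Content-Type}o\"" withtype
        CustomLog /var/log/apache2/access_type.log withtype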

    Read the article

  • Own server, multiple websites: most secure PHP setup

    - by plua
    Hi there, We have a company server with a variety of websites. They are maintained by different people from within our company. All websites are public, but server access is limited to our company only. This is NOT a shared hosting environment. We are looking into securing the server, and are currently analyzing the risks related to file permissions. We feel the highest risk is when files are uploaded and then opened/executed by the public. This should not happen, but an error in a script might allow people to do so (there are image uploaders, file uploaders, etc.). The uploader scripts use PHP. So the question is: what is the best way of setting and organizing the permissions of files and processes? There seem to be several options for running PHP (and Apache) and setting the permissions. What should we take into consideration? Any tips? We are considering mod_php and FastCGI, but perhaps given our situation other solutions are preferred?
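
    Whatever the process model, one commonly recommended hardening step can be sketched like this, assuming Apache with mod_php and a hypothetical uploads path: deny PHP execution in any directory the public can upload into, so an uploaded script is served as inert data instead of run.

        # Apache vhost sketch (the path is an illustration)
        <Directory /var/www/site1/uploads>
            # mod_php: turn the PHP engine off for anything under uploads/
            php_admin_flag engine off
        </Directory>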

    Read the article

  • Windows 2003 DC to Windows 2008 R2 DC with same name and same IP

    - by TheCleaner
    Environment: Windows 2003 native domain with 8 DCs. I've got an old domain controller that is running 2003, the Enterprise CA role, DHCP, DNS, a few GPO scripts that point to shares on it, and some other minor functions. All our servers point to it as their primary DNS, and there are lots of references to its IP or name throughout the domain at this point (8+ years later). I really don't feel like manually changing all of this; it would be a pretty massive undertaking. I want to follow this guide: http://msmvps.com/blogs/acefekay/archive/2010/10/09/remove-an-old-dc-and-introduce-a-new-dc-with-the-same-name-and-ip-address.aspx, to hopefully end up with basically an "in-place upgrade", so to speak. I considered just doing a P2V of the box, but we don't really want to keep it around running 2003, to be honest. I also considered using a CNAME and adding a 2nd IP (the old one), but again, it seemed like it would be cleaner to follow the attached link. My actual question: are there any gotchas or big caution signs when doing what the link suggests? Has anyone gone down this road who has advice on how to proceed?

    Read the article

  • Metacity/Compiz not starting upon login, Ubuntu 10.10

    - by Ryan Lanciaux
    TLDR: As of this afternoon, I do not have a window manager when I log in to Ubuntu 10.10. I would like to have a window manager on login without needing to add it to the startup applications. I just started using Linux again as my home OS (I used it for a long time years ago but have been on Windows up until this past weekend), so this may be kind of n00b-ish :) Anyway, up until today, everything on my machine was running okay. I did not have Compiz running as the default WM because I'm running the NVidia drivers and Xinerama (and as I understand it, Xinerama and Compiz don't work well together). I made no changes to my xorg.conf etc., but today when I logged in, I had to manually start Metacity from the command line to get any window manager. I'm really not sure what's causing this or what I can do to get it working again. My xorg.conf is available here: https://gist.github.com/845618. My default window manager is set to /usr/bin/metacity in Configuration Editor under /desktop/gnome/applications/window_manager. P.S. Any tips on how to run 3 monitors where I can move windows between screens without Xinerama would be appreciated, but that's probably for another thread :)
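
    A small diagnostic sketch, assuming GNOME 2's gconf tooling (which key this GNOME release actually consults is my assumption, so check both): confirm what the session reads at login, and set it explicitly if it comes back empty or wrong.

        # Key already set per the question (legacy):
        gconftool-2 --get /desktop/gnome/applications/window_manager

        # Key gnome-session may consult instead for the WM component:
        gconftool-2 --get /desktop/gnome/session/required_components/windowmanager
        gconftool-2 --set --type string /desktop/gnome/session/required_components/windowmanager metacity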

    Read the article

  • OS X superuser folders automatically created; peruser launchd process appears to kill 501

    - by Ric Pen
    New Apple laptop, OS X 10.8.2. I have used OS X, but many years previously, and am not familiar with the subtleties of or changes to com.apple.launchd.peruser.x. I have previously (and, in retrospect, foolishly) made changes to these rapidly spawned new peruser accounts (my initial reaction was that if ipfw was disabled, then I might well be under hacker attack, which I have dealt with years ago), but I believe I was wrong, and the results of my efforts at preserving the system's integrity have in fact been destructive, overreactive, and have resulted in much work to restore. My understanding from other posts is that superuser protocols have changed quite dramatically since I bought the first developer version of OS X many years ago. I haven't developed on Apple much since then, with the exception of WebObjects (IMO much underrated at that time, and more user-friendly than ASP prior to .NET, I vaguely recall). Creation of the apparently nasty peruser folders appears to confound the 501 process, which logs an inability to find the firewall (ipfw). Can someone help me with this? I am concerned that either the system is improperly configured or an application was improperly installed (although there is little here beyond Apple's SDK, which I find quite accommodating and intuitive). Still, I am a novice, only sporadically develop at this time, and would really just like to see this system running happily. Please offer assistance, in the form of potential info sources, or, if you have had a similar experience, perhaps scripts to suss out this issue. I do not wish to damage the system, but Apple's Developer Connection and discussion threads do not appear to have dealt with this particular issue recently... although I may well have missed something you have not; please apprise. Any assistance on this issue is very much appreciated, by an old guy who wants to do some things which were fun about 20 years ago.

    Read the article

  • Is it possible to open a SQLite database from within Microsoft SQL Management Studio?

    - by Brian T Hannan
    Is there a way to open a .db file (SQLite database file) from within Microsoft SQL Management Studio? Right now we have a process that grabs the data from a Microsoft SQL Server database and puts it into a SQLite database file that will be used by an application later on. Is there a way to open the SQLite database file so that it can be compared to the data inside the SQL Server database, using only one SQL query? Is there a plug-in for Microsoft SQL Management Studio? Or maybe there is another way to do this same task using only one query. Right now we have to write two scripts, one for the SQL Server database and one for the SQLite database, then take the output from each in the same format and put them each in their own OpenOffice spreadsheet file. Finally, we compare the two files to see if there are any differences. Perhaps there's a better way to do this. P.S. A lot of applications use SQLite internally: Well-Known Users Of SQLite
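
    Failing an in-Studio plug-in, a sketch for tightening the current two-script workflow, assuming the sqlite3 command-line tool is available (the table name is hypothetical): exporting straight to headered CSV removes the OpenOffice reformatting step, leaving just a plain file diff against the SQL Server export.

        # Dump one table from the .db file as CSV with a header row
        sqlite3 -header -csv app.db "SELECT * FROM products ORDER BY id;" > sqlite_side.csv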

    Read the article

  • Desktop Provisioning for a Small Linux Software Development Team

    - by deakblue
    Goal: get a small team using a standard development image rather than 4 software devs setting up their own environments.

    Why: it takes a day or days to install a distro, build-specific libraries, tools like editors and IDEs, MySQL, CouchDB, Java, Maven, Python, the Android SDK, etc. It's a giant PITA that, when repeated 4 times by 4 developers (not sysadmins), wastes time and generates annoying divergences that crop up later (it-builds-on-my-box syndrome). There's no sharing of productivity, settings, tricks, scripts, set-ups. Some of this is helped by segregating the build systems into headless VirtualBox images, but that doesn't really address tooling, or the GUI-desktop dev that needs doing. So I see three basic strategies: ghosting, virtualization, and finally creating a kind of in-house Linux distro (I guess Google does something like this). The target dev environment is based on Debian with OpenBox and must allow a mix of 3rd-gen Core i7 notebooks (8GB minimum) to work both single- and multihead. Important: the lappies are not all the same, but a mix of 2012 MacBooks and PCs. So, three questions, with a sketch after this list:

    Virtualization: is doing all of your work within a VM, like VirtualBox, practical on this hardware, or annoying?

    Ghosting: will laptops from different manufacturers make this impractical?

    DIY distro: short of scripting a bunch of package installs, I don't know if there's any "distro-maker" that could keep this from being an epic project of scripting package installs.

    So, any advice?
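
    On the last option, the unglamorous baseline is smaller than it sounds: a single version-controlled install script already kills most per-developer divergence, even before any "distro-maker" enters the picture. A minimal sketch, assuming Debian and an illustrative package list:

        #!/bin/sh
        # provision.sh -- one shared, reviewable definition of the dev box
        # (package names are illustrative; pin versions as needed)
        set -e
        apt-get update
        apt-get install -y openbox mysql-server couchdb \
            default-jdk maven python vim git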

    Read the article

  • Windows recovery partition with GRUB2

    - by Actorclavilis
    So I recently got a new Toshiba laptop and installed Ubuntu 12.04 on it. Since it is a "Windows 7 Enabled" machine, or some other proprietary nonsense like that, a few hardware features are designed only to work with W7. Eventually I found a way to enable these hardware functions by booting into the W7 recovery disc; however, they sporadically stop working. I'm moderately surprised that I was able to get anything to work at all, so I don't especially want to spend more time fixing the problems in a different fashion. Now, I don't actually own the recovery disc; it's my father's. Since it's a pain to have to go asking for the disc every time the features stop working, I made an image of the disc and was hoping to make a 'recovery' partition like some computers have. However, unetbootin and GRUB2 both want a kernel and initrd to point to on startup, and something like

        set root=(hd0,1)
        loopback lo /w7r.iso
        set root=(lo)
        chainloader +1

    in the spirit of the makeactive / chainloader +1 commands that I used to use to dual-boot Linux and Windows simply gives me a file-not-found error. My question, therefore, is: having written a Windows ISO to a partition (such as with dd if=w7r.iso of=/dev/sda4), is it possible to convince GRUB2 to boot from it? Thanks in advance.
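
    For the dd-to-partition variant specifically, a sketch of the matching GRUB2 entry, with the caveat that this is an assumption: the partition number is illustrative, and Windows boot code authored for optical media often refuses to start from a hard-disk partition regardless of what GRUB hands it.

        # /etc/grub.d/40_custom -- then run update-grub
        menuentry "W7 Recovery (image dd'd to /dev/sda4)" {
            set root=(hd0,msdos4)
            chainloader +1
        }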

    Read the article

  • Hyper-V VSS writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script, the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host

        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:

        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent

        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive

        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup

        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next is just a robocopy, and then an unexpose. Now, when I run the above script I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, 2 things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests show "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I get seem to be old copies; they are missing files, users and updates that are on the actual machines. I also tried putting the machines into a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.
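
    One quick check worth sketching, using the stock VSS tooling on the host (the diagnosis itself is an assumption): confirm the Hyper-V writer is actually healthy at snapshot time. A writer stuck in a failed state, or outdated Integration Services inside the Windows guests, would be consistent with snapshots that quietly capture stale data.

        vssadmin list writers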

    Read the article

  • Configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts pages that are just general scripts (PHP, etc.) and MP3 downloads (some of which are fairly large, up to 200 MB). I am running lighttpd on the servers, on Linux (Ubuntu 64). Everything is fine, but under high load the server is not accessible (or very slow; even sshing in takes a while), and I am guessing this is due to a huge number of MP3 downloads at that time. Consequently, DNS sees the server as down and redirects all the traffic to the other servers, and after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to continue running (and the web pages, PHP etc., to always work, but downloads don't always have to work). Should I just have 2 web servers running (one for the downloads and one for the PHP pages), or is it perhaps something I can fix in my lighttpd configuration? Here are the snippets from my configuration:

        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"

        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "idle-timeout" => 20,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "64",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )) )

        # normal php site
        $HTTP["host"] =~ "bar.com" {
            server.document-root = "/usr/local/www/sites/bar.com/"
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
            server.document-root = "/home/audio/"
            dir-listing.activate = "enable"
            dir-listing.hide-dotfiles = "enable"
            evasive.max-conns-per-ip = 1
            evasive.silent = "enable"
            # connection.kbytes-per-second = 256
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }

    Read the article

  • What could cause a 101 error in WAMP under Windows 7?

    - by Brayn
    Hey, I've been using WAMP for local development for quite a while now, but lately I've been getting an Error 101 message when I browse localhost sites. It's possible that this appeared after the last WAMP update, but I'm not 100% sure. If I try again and again, after several page refreshes it works, but it's really annoying! The exact error message is:

        Error 101 (net::ERR_CONNECTION_RESET): Unknown error.

    This is my configuration:

        OS: Windows 7
        Apache: 2.2.11
        PHP: 5.2.9-2
        WAMP: 2.0

    Also, the local scripts connect to a remote MySQL server; they don't use the local MySQL (I don't know if it matters, just thought I'd let you know). I've been looking into the Apache logs and found the following. It seems that the Apache server keeps restarting, and I can't figure out why:

        [Wed Oct 14 13:52:30 2009] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Oct 14 13:52:30 2009] [notice] Apache/2.2.11 (Win32) PHP/5.2.9-2 configured -- resuming normal operations
        [Wed Oct 14 13:52:30 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Oct 14 13:52:30 2009] [notice] Parent: Created child process 6784
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Child process is running
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Acquired the start mutex.
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Starting 64 worker threads.
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Starting thread to listen on port 80.
        [Wed Oct 14 13:52:32 2009] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Oct 14 13:52:33 2009] [notice] Apache/2.2.11 (Win32) PHP/5.2.9-2 configured -- resuming normal operations
        [Wed Oct 14 13:52:33 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Oct 14 13:52:33 2009] [notice] Parent: Created child process 3572
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Child process is running
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Acquired the start mutex.
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Starting 64 worker threads.
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Starting thread to listen on port 80.

    I've also checked Windows Firewall and disabled any other protection that I have on this computer, with no improvement. Thanks!

    Read the article

  • Java Deployment and Configuration (1.6.0_21)

    - by user125137
    Software: Java Runtime Environment 1.6.0_21
    OS: Windows XP Professional 32-Bit, SP3

    Situation: a new piece of web-based software is being deployed this week, and prior to this all the company desktops need to be set up to meet its requirements. One of these requirements is JRE 1.6.0_21. I have successfully scripted the removal of all other Java versions and the installation of the required version; however, I cannot get it configured properly. One of the requirements is that the Java console be set to disabled; if it is not, it can cause an issue with a particular function. I have pushed out a deployment.config and deployment.properties, but the console just will not disable itself. I know the config is being read correctly, because the update tab is being correctly disabled and removed.

    deployment.config:

        deployment.system.config=file\:C\:/WINDOWS/Sun/Java/Deployment/deployment.properties
        deployment.system.config.mandatory=true

    deployment.properties:

        #deployment.properties
        #Fri Jun 15 09:34:31 EST 2012
        deployment.version=6.0
        deployment.console.startup.mode=DISABLE
        deployment.javaws.autodownload=NEVER
        deployment.javaws.autodownload.locked=

    There is no change if I set the console to ENABLE either; it remains on the default of hidden. I'm sure I could disable the console with a registry change of some form, but my preference is to do it via the deployment files, as that gives the option of centralising the properties file on a network share if we wish. If anyone has any suggestions it would be appreciated.
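
    One detail worth sketching as a guess, extrapolated from the pattern the file above already uses for autodownload: system-level deployment keys are only enforced over per-user profiles when a matching ".locked" entry is present, and the console key above has no lock.

        #deployment.properties -- sketch: add a lock on the console key
        deployment.console.startup.mode=DISABLE
        deployment.console.startup.mode.locked=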

    Read the article

  • Transferring files to a server IP and port

    - by Mason
    I need to transfer files from my local computer, on Windows 7, to a server running Linux. I access the server with PuTTY through SSH at a specific IPv4 address and port number. I've attempted using the pscp command from my local computer but was denied access by the server: "Fatal: Network error: Connection refused".

        c:\> pscp test.csv userid@<IPv4_Address>:Port# /path/destination_file_name

    Either the server blocks all pscp attempts from unauthorized users (most likely my laptop included) or I used the command incorrectly. If you have experience using this command: where exactly will the file get transferred to? I'm assuming that the destination path starts at my home directory on the server. Also, if you have any other alternative methods of transferring the files, let me know.

    Update 1: I have also tried using WinSCP, but I got permission denied for that as well. It looks like the server will not let me upload or save files.

    Solved: I had a complete lapse of memory and forgot about sudo (spent too much time with scripts the last 2 months), so I was able to change the permissions to allow external editing. Thanks for all the help guys!
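
    For reference, a sketch of the invocation itself, since the original form has two likely problems (the port, IP and path below are placeholders): pscp takes a non-standard port as a -P flag rather than in the address, and the remote path goes after the colon, not as a separate argument.

        REM port, IP and remote path are placeholder values
        pscp -P 2222 test.csv userid@203.0.113.10:/home/userid/test.csv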

    Read the article

  • Attaching 3.5" desktop drive to MacBook SATA

    - by Kyle Cronin
    I have a mid-2007 MacBook that, according to the Apple Store, has suffered some liquid damage and requires a new logic board to operate correctly, a ~$750 repair I've been told (it would normally be around ~$300 were it not for the "liquid damage"). The unit itself works fine; the only problem I've been having is that the system does not recognize the battery and will not charge it. Curiously, the system can still be powered by the battery, and even recognizes when the power cord is detached by dimming the backlight, but I digress. Now that this laptop will likely become a desktop, I'm wondering if it might be possible to attach a desktop drive. I recently purchased a 2TB SATA drive and I'm wondering if it's possible to somehow attach it where the current internal drive connects. Obviously the drive itself will not fit inside the device, but as the unit will spend the rest of its days on my desk, that's not really much of an issue. My main questions are:

    Is this possible? If so, how would I connect the drive? Would a SATA extender cable work?

    Is the SATA port on my MacBook capable of powering a desktop drive? Or should I just get a SATA male-to-female cable and see if I can power the drive through other means (a cheap power supply, for example)?

    The disk I'm referring to is the Hitachi Deskstar HD32000. Though I couldn't find that exact model on Hitachi's support site, these are the power requirements for a similar drive, the 7K2000 (2TB, 7200RPM, SATA II):

        Power requirement:          +5 VDC (+/-5%), +12 VDC (+/-10%)
        Startup current (A, max.):  1.2 (+5V), 2.0 (+12V)
        Idle (W):                   7.5

    From what I've read, 2.5" drives require 5V, meaning that my MacBook is obviously capable of producing it. The specs seem to suggest that this drive is capable of accepting 5V instead of the typical 12V; is this an accurate interpretation of the power requirements? Or does it need both 12V and 5V?

    Read the article

  • squid bypass for a domain

    - by krisdigitx
    I am using squid with adzap. Is it possible to have squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:             1440  20%  10080
        refresh_pattern ^gopher:          1440   0%   1440
        refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
        refresh_pattern .                    0  20%   4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
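
    An aside worth sketching, using squid's stock ACL syntax (whether it interacts cleanly with adzap's redirector is an assumption): always_direct controls whether a request bypasses a cache peer, while a "cache deny" rule is what stops squid storing a domain's objects at all; if the goal is strictly "never cache cnn.com", the latter may be the closer fit.

        acl nocache_domains dstdomain .cnn.com
        cache deny nocache_domains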

    Read the article

  • Which Vista services can be disabled with impunity?

    - by GwenKillerby
    I use Vista on an HP Pavilion DV2 laptop. When I look through all the services my laptop starts, it really seems there are way too many of them. I multi-boot with XP and 7; both start up in 40 seconds, while Vista takes four minutes. Is there some software that can determine which services I don't need? On 7 there's no proprietary HP stuff at all, yet it seems to run fine. There are a LOT of these services, and some just sit there doing nothing, monitoring for updates I don't really need or want, or need to know about the second they're available. Take, for instance, Parental Controls, the accessibility stuff for people with bad eyesight, Tablet PC support: I really never use any of that. My laptop is the only computer I use at home; there's no home network, aside from the modem-router, which is cabled, not wifi. Hope this question is specific enough. I've looked at the other questions but they didn't answer me. Thanks, Gwen.

    Read the article

  • Fedora 15: em1 recently disappeared and hostapd no longer serves internet to wirelessly connected devices

    - by Daniel K
    I have a laptop running hostapd, phpd, and mysql. This laptop uses an Ethernet connection to reach the internet and acts as a wireless access point for my workplace's wifi devices. After installing some software and reconnecting my Ethernet elsewhere, my "em1" device is no longer present, and wirelessly connected devices can no longer reach the internet. The software I recently installed is pptp and pptpd, and I updated some Fedora libraries. I have also recently moved my desk and laptop to another location, and thus had to reconnect the Ethernet elsewhere.

    Wifi devices no longer have access to the internet. Wirelessly connected devices are able to successfully log into the laptop, showing full strength, the correct SSID, and using the proper password. However, when I try to connect to a site like Google, the request times out. The device "em1" also no longer appears on my machine. Running:

        # ifup em1

    gives me the following output:

        ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device em1 does not seem to be present, delaying initialization.

    And running:

        # dhclient em1

    has the following output:

        Cannot find device "em1"

    When I run # dmesg | grep renamed, I get the following:

        renamed network interface eth0 to p4p1

    I've tried to connect to the internet through p4p1 directly from the laptop and was successful. However, my wireless devices connected to the laptop are not able to connect to the internet. I have uninstalled pptp and pptpd using # yum erase, but the problem still persists. To install pptp I used:

        # yum install pptp

    To install pptpd I did the following:

        # rpm -Uvh http://poptop.sourceforge.net/yum/stable/fc15/pptp-release-current.noarch.rpm
        # yum install pptpd

    To update my Fedora libraries I used:

        # yum check-update
        # yum update

    EDIT: Running # route produces the following results:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.11.200.1     0.0.0.0         UG    0      0        0 p4p1
        10.11.200.0     *               255.255.252.0   U     0      0        0 p4p1
        172.16.100.0    *               255.255.255.0   U     0      0        0 wlan0
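
    A sketch of one plausible direction, offered as an assumption rather than a confirmed fix (the MAC address below is a placeholder): since dmesg shows the NIC being renamed to p4p1, a persistent-net udev rule can pin the old "em1" name back so the existing ifcfg and NAT configuration matches again.

        # /etc/udev/rules.d/70-persistent-net.rules -- keep calling this NIC "em1"
        # (replace the MAC with the card's real address, then reboot)
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="em1"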

    Read the article

  • Hibernate between OS X and Bootcamp Win 7

    - by Willem
    Wouldn't it be great if someone wrote a guide or an app which allowed you to switch instantly between OS X and Windows using hibernate in both OSes? Windows 7 already has a "Hibernate" option which allows you to boot back to your OS X partition, but OS X does not exactly offer the same. However, there are possibilities here. It seems that recent Macs have 3 different kinds of sleep mode:

    Sleep: low power consumption, RAM still active.

    Legacy Safe Sleep: no power consumption(?), writes RAM to disk and shuts down (is this the same as Hibernate?).

    Safe Sleep: writes RAM to disk and enters sleep mode. If the battery level drops too low, it goes into Hibernate (is this the same Hibernate as the previous mode? It is the Hibernate I will be referring to in the rest of this post).

    It seems that I am unable to force my MacBook Pro (Late 2011) on OS X 10.7.3 into a true hibernate using either the command line or the apps that are supposed to do this. I believe the Mac should show that white loading bar whilst waking up if it was truly put into hibernate (which it does not). But I can get the white bar to show by letting my battery level drop to 0%, so there is obviously a system function for it (obviously, duh! :). When Win 7 goes into hibernate it shuts down completely, and you can then boot into OS X on startup. On OS X, however, hibernate forces you to wake up into OS X. Can you hack this so that you're allowed to select the boot partition after OS X hibernates? Would it be possible to use the true hibernate functionality of Win 7 and OS X to create a kind of instant switching between the two? Imagine this on a quick SATA-3 SSD like my 180GB Intel 520. Thanks / Willem
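
    On forcing the "true hibernate" half of this, a sketch using pmset (the key and values follow Apple's documented conventions for portables; whether mode 25 then survives a boot into Windows and back cleanly is untested here):

        # Show the current setting (portables usually default to 3, safe sleep)
        pmset -g | grep hibernatemode

        # 25 = write RAM to disk and power off, i.e. a real hibernate
        sudo pmset -a hibernatemode 25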

    Read the article

  • PHP files are downloaded, not executed in UserDir on Apache

    - by Fabian
    We're running a webserver using Debian 6.0.3 with Apache 2; we recently upgraded from Debian 5 to 6. Since then, PHP scripts in the user directories (using mod_userdir) have stopped working; they are downloaded instead of being executed. There is also a website using PHP outside of the user directories, and that one continues to work fine, so PHP seems to generally work on the server. I tested it with several PHP files, among them a simple phpinfo file that works fine on the main site but is just downloaded when copied into one of the user directories. The PHP files and the directory containing them are executable for everyone. The option in the Apache php5.conf that by default disables PHP in the user directories is commented out, so php5.conf looks like this:

        <IfModule mod_php5.c>
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler application/x-httpd-php
            </FilesMatch>
            <FilesMatch "\.phps$">
                SetHandler application/x-httpd-php-source
            </FilesMatch>
            # To re-enable php in user directories comment the following lines
            # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
            # prevents .htaccess files from disabling it.
            #<IfModule mod_userdir.c>
            #    <Directory /home/*/public_html>
            #        php_admin_value engine Off
            #    </Directory>
            #</IfModule>
        </IfModule>

    We restarted Apache after changing this. I'm running out of ideas now as to what the problem could be, and I don't know how to determine what is preventing those PHP files from being executed. Any ideas on how I can solve this?

    Update: Strangely, PHP seems to work fine in subfolders of user directories, so if I copy a PHP file from /home/user/public_html/ to /home/user/public_html/test/ it suddenly works.
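
    A diagnostic sketch rather than a fix (the paths assume Debian's stock layout): when a handler works at one directory depth but not another, it's worth searching every enabled Apache and PHP config file for a second directive that still matches public_html and overrides the one edited above.

        grep -rn "public_html\|engine" /etc/apache2/ /etc/php5/ 2>/dev/null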

    Read the article

  • Windows 7 corrupted profile: does prevention exist?

    - by Radek
    I have a dedicated Windows 7 (not on a domain) virtual machine for overnight automation testing. Some commands (mysqldump, tscon.exe) must be run under the administrator account. Last week the administrator account's profile was corrupted. I fixed it by renaming it in the registry and logging in as administrator. And today it is corrupted again. I use the administrator account only to run the above commands via runas. The computer is also restarted via cmd (the shutdown command) quite often, especially every night before automation testing starts. I checked the machine for viruses (a full scan using Avast), although I believed it was clean. Any idea how to prevent the profile from getting corrupted again?

    Update: The first log entry in the event log today is from 1.15am, and one of my scripts ran a runas command as administrator at exactly 1.15am. It was the second time runas was executed after the testing started, though. The same thing happened the second day in a row. Before the testing starts I need to copy one file that is locked, so I run handle.exe via runas to unlock it. That is what I think is corrupting the profile, though I am not able to reproduce it by myself. The message from Event Viewer is:

        Windows cannot load the locally stored profile. Possible causes of this error include insufficient security rights or a corrupt local profile.
        DETAIL - The process cannot access the file because it is being used by another process.

    Read the article

  • EventID 1058 Code 5, Sysvol is subdir of Sysvol - how to fix?

    - by nulliusinverba
    I have been trying to resolve this error, like many others:

        The processing of Group Policy failed. Windows attempted to read the file \\domain.local\SysVol\domain.local\Policies\{3EF90CE1-6908-44EC-A750-F0BA70548600}\gpt.ini from a domain controller and was not successful. Group Policy settings may not be applied until this event is resolved. This issue may be transient and could be caused by one or more of the following:
        a) Name Resolution/Network Connectivity to the current domain controller.
        b) File Replication Service Latency (a file created on another domain controller has not replicated to the current domain controller).
        c) The Distributed File System (DFS) client has been disabled.
        Error code: 5 = Access Denied.

    The incredibly helpful post is this one (http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/A_1073-Diagnosing-and-repairing-Events-1030-and-1058.html). Quoting its list of potential problems that can lead to 1030 and 1058 event errors:

        - Sometimes the permissions of the file folders that contain Group Policies (the Sysvol folder) can be corrupted.
        - Sometimes you have problems with NetBIOS.
        - Sometimes the GPO itself is corrupt, or you have a partial set of data for that GPO.
        - Sometimes you may have problems with File Replication Services, which almost always indicates a problem with DNS.
        - Sysvol may be a subfolder of itself: Sysvol/Sysvol

    I have the problem listed last, where Sysvol is a subfolder of Sysvol. The directory structure is:

        sysvol
            domain
            staging
            staging areas
            sysvol           (shared as "\\server\sysvol")
                domain.local
                    ClientAgent
                    Policies
                    scripts

    Interestingly, the second sysvol folder is the one that is shared as "\\server\sysvol". This makes me confident this is the issue with the permissions and error code 5. Also interestingly, my Server 2008 R2 servers can see it fine; my Server 2008 servers cannot, and get the error. This is consistent across all my servers. This latter fact makes me uncertain what I need to do to fix this up. Do I, e.g., simply move the shared sysvol folder up a level to replace the non-shared one? Any help greatly appreciated. Cheers, Tim.

    Read the article

  • Delay init from starting a service for a period of time?

    - by Matthew
    I am trying to get a rudimentary NFS server up and running. Right now the server is configured as an NFS server as a workaround for a vendor issue: the vendor does not support direct-attached clustered storage, which we are trying to get them to resolve. The vendor software is Splunk. The Splunk feature we are using requires its files to be located on shared storage (which for us is /mnt/nfs, until they support a real clustered filesystem). Currently the server has a GFS2 filesystem mounted at boot (it is the only server with the filesystem actively mounted, so there should be no problems with locking). We went with GFS2 so that switching over to a clustered filesystem is easy should the vendor begin supporting it. NFS is configured to mount that filesystem at /mnt/nfs, which the Splunk installation then sees. Splunk is configured to find its configuration files in /mnt/nfs.

    However, I am running into a problem where the Splunk daemon starts before NFS has finished loading; because it sees nothing at /mnt/nfs it starts creating files there, and then when those files disappear (NFS finishes mounting the share), Splunk craps out. Splunk is set to run at runlevel 3, S90. NFS is set at runlevels 2-5, S60. Is there any way to delay the startup of the Splunk process further?
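
    One workaround that can be sketched without reshuffling the S-numbers, assuming a SysV-style init script and the usual /opt/splunk layout (both assumptions): have the Splunk init script wait until the mount actually exists before launching the daemon.

        #!/bin/sh
        # Fragment for the top of the splunk init script's start) branch:
        # poll up to 60s for /mnt/nfs to become a real mountpoint.
        for i in $(seq 1 60); do
            mountpoint -q /mnt/nfs && break
            sleep 1
        done
        if ! mountpoint -q /mnt/nfs; then
            echo "/mnt/nfs not mounted; refusing to start splunk" >&2
            exit 1
        fi
        /opt/splunk/bin/splunk start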

    Read the article

  • Deleted printers keep coming back, and multiplying

    - by MojoDK
    My users are on 2012 R2 RDS Session Host servers. I've used "Deploy Printers" (from Print Management) to deploy 4 printers. Over the last week I've had a lot of problems where users can't print. If I deleted a printer and added it again, they could print just fine. Now I've removed all printer deployment from GPO, and I have no printers in any login scripts. I did a gpupdate /force, but all 4 printers are now listed 3 times. If I delete the printers and log off and back on, all the printers pop up again. Sigh! This is driving me nuts. This script doesn't show any of the "SVFREJA" printers:

        Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")
        Set colPrinters = objWMIService.ExecQuery("Select * From Win32_Printer")

        If colPrinters.Count <> 0 Then ' If there are some network printers
            Dim s
            s = ""
            For Each objPrinterInstalled In colPrinters ' For each network printer
                s = s + objPrinterInstalled.Name + chr(13)
            Next
            msgbox s
        End If

    It gives me this result... (sorry for the big picture)

    My problem is not with the "redirected" printers; my problem is that I have several printers with the same name (on SVFREJA) and I can't get rid of them. Any idea why I can't get rid of the orphaned printers?

    Read the article
