Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • Unix Permissions: Enable access to files no matter the user?

    - by TK Kocheran
    I've been using Linux for a long time and I still am completely in the dark about how file permissions really work. With that in mind, does anyone have any books or thorough guides I could read to really understand things completely? I've done my fair share of sysadminning, so I know the easy stuff like making directories readable and writable, making files executable, and changing the owner of a file, but on sharing files across users, I'm lost.

    Here's my main problem. I have a number of machines across which I intend to synchronize my music library. I've been using Unison for a while now and it's a great choice, as I can easily run it over SSH on my local network, which I just set up. Win-win.

    Up until this point, I've been synchronizing computers using a 2TB external hard drive (computer 1 unisons to HD, computer 2 unisons to HD, etc.). This is tedious at best, especially since I encrypted the drive, making it a huge hassle to hook it up to all of my machines and sync it. Anyway, the drive is running ext4 (in TrueCrypt), so it maintains all Unix filesystem info like owners and groups.

    I just set up a new machine and Unison'd it to get the music on it, and I realized that now all of my permissions are fubar. I had to run Unison as root since that was the only way I could get the files to come off of the external drive. Apparently, since I'm using a different user name on this machine than my usual "rfkrocktk" across all machines, this essentially throws a huge wrench in the gears.

    Here's my use case. This laptop has two effective users, "leandra" and "rfkrocktk". I want to share music between these two users, so I symlinked /home/rfkrocktk/Music to point to /home/leandra/Music. How do I (a) allow both users to read, write, and delete files in this folder, and (b) keep everything nicely in sync without messing up file ownership?
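
    The usual approach to the two-user case is a shared group plus the setgid bit; a minimal sketch (the group name "music" and the use of default ACLs are my own illustration, not from the post):

        # create a shared group containing both users
        sudo groupadd music
        sudo usermod -aG music leandra
        sudo usermod -aG music rfkrocktk

        # hand the library to the group; setgid (2xxx) on directories
        # makes new files and subdirectories inherit the group
        sudo chgrp -R music /home/leandra/Music
        sudo find /home/leandra/Music -type d -exec chmod 2775 {} +
        sudo find /home/leandra/Music -type f -exec chmod 664 {} +

        # optional: a default ACL so new files are group-writable
        # regardless of each user's umask
        sudo setfacl -R -d -m g:music:rwx /home/leandra/Music

    Running Unison as each user (rather than root) on top of this keeps ownership from drifting between syncs.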

    Read the article

  • grepping a substring from a grep result

    - by allentown
    Given a log file, I will usually do something like this:

        grep 'marker-1234' filter_log

    What is the difference in using '' or "" or nothing in the pattern?

    The above grep command will yield many thousands of lines, which is what I desire. Within those lines, there is usually one chunk of data I am after. Sometimes I use awk to print out the fields I am after, but in this case the log format changes, so I can't rely on position exclusively; not to mention, the actual logged data can push positions forward. To make this understandable, let's say the log line contained an IP address, and that was all I was after, so I can later pipe it to sort and uniq and get some tally counts. An example may be:

        2010-04-08 some logged data, indetermineate chars - [marker-1234] (123.123.123.123) from: [email protected] to [email protected] [stat-xyz9876]

    The first grep command will give me many thousands of lines like the above; from there, I want to pipe it to something, probably sed, which can pull out a pattern within and print only the pattern. For this example, extracting the IP address would suffice. I tried, but is sed not able to understand [0-9]{1,3}\. as a pattern? I had to use [0-9][0-9][0-9]\., which yielded strange results until the entire pattern was created. This is not specific to an IP address; the pattern will change, but I can use that as a learning template. Thank you all.
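
    A sketch of the two standard approaches (assuming GNU grep and GNU sed; the brace issue is that plain sed uses basic regular expressions, where intervals must be written \{1,3\}):

        # grep -o prints only the matching text, one match per line
        grep 'marker-1234' filter_log \
            | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
            | sort | uniq -c | sort -rn

        # sed equivalent: -r enables extended regexes so {1,3} works unescaped;
        # -n plus the p flag prints only lines where a substitution happened
        grep 'marker-1234' filter_log \
            | sed -nr 's/.*\(([0-9]{1,3}(\.[0-9]{1,3}){3})\).*/\1/p'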

    Read the article

  • Volume licensed copy of MS Office 2007 shows "Non Commercial Use" in title bar

    - by Linker3000
    I have just removed the demo copy of Office 2007 preinstalled on a new laptop and replaced it with an install of the full Professional edition downloaded from the MS Volume Licensing site, and installed one of our volume licence keys, yet the apps (Word etc.) show "Non Commercial Use" in the title bar, which is what usually happens in the Home and Student edition. I have tried:

    - Deleting the Office registration keys in the registry and using one of our other Office 2007 volume licence keys (we have 7) when prompted to re-register
    - Uninstalling Office completely and reinstalling it from a newly-downloaded ISO burned to CD, and also from a compressed file that installs from hard disk/USB stick (both from Microsoft - no dodgy stuff)

    Yet the non-commercial message persists. Although it's a cosmetic issue, the laptop is going to be used for customer presentations, so the sales person is rightly concerned about the image this portrays. I presume there may be something floating around the registry or in a file somewhere, but I can't find it. Articles I have found elsewhere just refer to the message being related to the use of a Home and Student licence key, which is 100% not the case. Any thoughts? Thanks.

    Read the article

  • Haproxy not properly passing on X-Forwarded-For header

    - by JesseP
    I have backend web servers that receive requests by way of haproxy -> nginx -> fastcgi. The web app used to see multiple IPs coming through in the X-Forwarded-For header, chained together with commas (most original IP on the left). At some point in the recent past (just noticed, so not sure what caused it) something changed, and now I'm only seeing a single IP passed in the header to my web application. I've tried with haproxy 1.4.21 and 1.4.22 (recent upgrade) with the same behavior.

    Haproxy has the forwardfor header set:

        option forwardfor

    Nginx's fastcgi_params config defines this header to be passed to the app:

        fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;

    Anyone have any ideas on what might be going wrong here?

    EDIT: I just started logging the $http_x_forwarded_for variable in nginx logs, and nginx is only ever seeing a single IP, which shouldn't ever be the case, as we should always see our haproxy IP added in there, right? So the issue must either be in nginx's handling of the variable coming in, or haproxy not building it properly. I'll keep digging...

    EDIT #2: I enabled request and response header logging in HAProxy, and it is not spitting anything out for X-Forwarded-For, which seems very odd:

        Oct 10 10:49:01 newark-lb1 haproxy[19989]: 66.87.95.74:47497 [10/Oct/2012:10:49:01.467] http service/newark2 0/0/0/16/40 301 574 - - ---- 4/4/3/0/0 0/0 {} {} "GET /2zi HTTP/1.1" O

    Here are the options I set for this in my frontend:

        mode http
        option httplog
        capture request header X-Forwarded-For len 25
        capture response header X-Forwarded-For len 25
        option httpclose
        option forwardfor

    EDIT #3: It really seems like haproxy is munging the header and just passing a single one on to the backend. This is having a real impact on our production service, so if anyone has any ideas it would be greatly appreciated. I'm stumped... :(
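
    One way to localize the break (my own debugging sketch; the host name is a placeholder) is to inject a known X-Forwarded-For value and compare what each hop reports, since haproxy's forwardfor appends the client address to any chain already present:

        # hit haproxy with a pre-seeded chain; if forwardfor is working,
        # the backend should see "203.0.113.9, <real-client-ip>"
        curl -s -H "X-Forwarded-For: 203.0.113.9" http://haproxy-host/some-page >/dev/null

        # then compare what nginx logged for $http_x_forwarded_for
        tail -n 5 /var/log/nginx/access.log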

    Read the article

  • Where in the stack is Software Restriction Policies implemented?

    - by Knox
    I am a big fan of Software Restriction Policies for Microsoft Windows and was recently updating our settings for this. I became curious as to where Microsoft implemented this technology in the stack.

    I can imagine a very naive implementation being in Windows Explorer, where, when you double click on an exe or other blocked file type, Explorer would check against the policy. I call this naive because obviously this wouldn't protect against someone typing something in a CMD window - or worse, Adobe Reader running an external application. On the other hand, I can imagine that Software Restriction Policies could be implemented deep in the stack, almost at the metal. In this case, the low-level loader would load the questionable file into memory, but mark that memory in the memory manager as non-executable data.

    I'm pretty sure that Microsoft did not do the most naive implementation, because if I block Java using a path block, Internet Explorer will crash if it attempts to load Java - which is what I want. But I'm not sure how deep in the stack it's implemented, and any insight would be appreciated.

    Read the article

  • NX Client for Windows 7 Opens Remote Desktop in Multiple Windows

    - by Corey Kennedy
    What I'm trying to do: access my Ubuntu desktop remotely via NX Client on my Windows 7 laptop.

    My environment:

    - Server: Ubuntu 10.10 on an AMD 1GHz/512MB RAM PC
    - Client: Windows 7 on a ThinkPad SL510
    - Software: the server is running NXServer 3.4.0 with the xfce4 window manager; the laptop is using NX Client for Windows

    In my NX Client "Desktop" settings I've selected "Unix" and "Custom" for OS and environment. I've also specified "startxfce4" as the application to launch when NX connects.

    I am able to authenticate an NX session on my laptop. By this I mean, I can start the client on my laptop, enter credentials for my Linux user, and NX establishes a connection to the server and attempts to open a remote desktop window. The problem, though, is that this remote desktop is "fragmented" into many windows. One window will display the bulk of my desktop (complete with desktop icons for "Home," "File System," and "Trash"), while another window will contain the taskbar, and another window will contain the application strip. I can select each of these windows individually, but I cannot click on any objects within them.

    I've searched Super User, Ubuntu Forums, NX help, Server Fault, and tried many Google searches - none have turned up another case of this particular problem. I'm stumped. Does anyone have any suggestions for what I might try? I'm guessing the problem has to do with my xfce config files, but I've only just set up this server - it's been a long time since I've used Linux and there's a lot I just don't know.

    What I am NOT trying to do: use desktop sharing from Ubuntu, whereby I VNC into a desktop that I've already established on the server. I am trying to configure this Linux box as a headless server that I can stash someplace out-of-the-way in my house, then interact with through my laptop. I don't want to have a monitor or keyboard connected to the Linux box.

    Thanks for your help!

    Read the article

  • Pattern matching gnmap fields with SED

    - by Ovid
    I am testing the regex needed for creating field extraction with Splunk for nmap and think I might be close... Example full line:

        Host: 10.0.0.1 (host) Ports: 21/open|filtered/tcp//ftp///, 22/open/tcp//ssh//OpenSSH 5.9p1 Debian 5ubuntu1 (protocol 2.0)/, 23/closed/tcp//telnet///, 80/open/tcp//http//Apache httpd 2.2.22 ((Ubuntu))/, 10000/closed/tcp//snet-sensor-mgmt///   OS: Linux 2.6.32 - 3.2   Seq Index: 257   IP ID Seq: All zeros

    I've used underscore "_" as the delimiter because it makes it a little easier to read:

        root@host:/# sed -n -e 's_\([0-9]\{1,5\}\/[^/]*\/[^/]*\/\/[^/]*\/\/[^/]*\/.\)_\n\1_pg' filename

    The same regex with the escape characters removed:

        root@host:/# sed -n -e 's_\([0-9]\{1,5\}/[^/]*/[^/]*//[^/]*//[^/]*/.\)_\n\1_pg' filename

    Output:

        ...
        Host: 10.0.0.1 (host) Ports:
        21/open|filtered/tcp//ftp///,
        22/open/tcp//ssh//OpenSSH 2.0p1 Debian 2ubuntu1 (protocol 2.0)/,
        23/closed/tcp//telnet///,
        80/open/tcp//http//Apache httpd 5.4.32 ((Ubuntu))/,
        10000/closed/tcp//snet-sensor-mgmt///   OS: Linux 9.8.76 - 7.3   Seq Index: 257   IPID Seq: All zeros
        ...

    As you can see, the pattern matching appears to be working, although I am unable to:

    1 - match on both possible entry terminators (comma and whitespace/tab), so the last line contains unwanted text (in this case, the OS and TCP timing info), and
    2 - remove any of the unnecessary data, i.e. print only the matching pattern. It is actually printing the whole line; if I remove sed's -n flag, the remaining file contents are also printed.

    I can't seem to locate a way to only print the matched regex. Being fairly new to sed and regex, any help or pointers is greatly appreciated!
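
    For the "print only the match" part, GNU grep's -o flag is usually simpler than sed, since it emits each match on its own line; a sketch, with the file name my own placeholder:

        # one port/state/service entry per output line; the trailing OS text
        # drops out because the match must end at the entry's final slash
        grep -oE '[0-9]{1,5}/[^/]*/[^/]*//[^/]*//[^/]*/' scan.gnmap

    Plain sed substitutes within a line and then prints the whole line, which is why -n with the p flag still shows the surrounding text here; matching "comma or whitespace" as a terminator would be [,[:space:]] in an extended regex.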

    Read the article

  • Chrome caching 302 redirects

    - by Thermionix
    I have a PHP script which is used to rotate banner images on a site. Under Firefox/IE, page refreshes will make another request and a different image will be returned. Under Chrome, the request seems to be cached and only opening the page in a new tab will cause it to actually query the script. I believe this used to work in older versions of Chrome. I've tried a few different types of redirect codes, all with the same result. Any tips?

        <img class="banner" src="/inc/banner.php" alt="">

        ~$ cat /var/www/inc/banner.php
        <?php
        header("HTTP/1.1 302 Redirect");
        header("Cache-Control: max-age=0, no-cache, no-store, must-revalidate");
        //header('HTTP/1.1 307 Temporary Redirect');
        //header("expires: none");
        //header("expires: max");
        //header("Cache-Control: public");

        $folder = '../img/banner/';
        $exts = 'jpg jpeg png gif';
        $files = array();
        $i = -1;
        if ('' == $folder) $folder = './';

        $handle = opendir($folder);
        $exts = explode(' ', $exts);
        while (false !== ($file = readdir($handle))) {
            foreach ($exts as $ext) { // for each extension check the extension
                if (preg_match('/\.'.$ext.'$/i', $file, $test)) { // faster than ereg, case insensitive
                    $files[] = $file; // it's good
                    ++$i;
                }
            }
        }
        closedir($handle); // We're not using it anymore

        mt_srand((double)microtime()*1000000); // seed for PHP < 4.2
        $rand = mt_rand(0, $i); // $i was incremented as we went along
        header('Location: '.$folder.$files[$rand]);
        flush();
        ?>

    curl output:

        ~$ curl -I -k https://example.net/inc/banner.php
        HTTP/1.1 302 Redirect
        Server: nginx/1.1.14
        Date: Fri, 24 Feb 2012 03:23:46 GMT
        Content-Type: text/html
        Connection: keep-alive
        X-Powered-By: PHP/5.3.10-1ubuntu1
        Cache-Control: max-age=0, no-cache, no-store, must-revalidate
        Location: ../img/banner/2.jpg
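
    A quick client-independent check (my own sketch, reusing the URL from the curl output above) is to request the script a few times and compare the Location headers; if they differ, the rotation is fine server-side and the problem is purely Chrome caching the redirect:

        for i in 1 2 3; do
            curl -skI https://example.net/inc/banner.php | grep -i '^Location:'
        done

    If the server side checks out, a common workaround is a cache-buster on the client, e.g. appending a changing query string to the img src so Chrome treats each request as a new URL.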

    Read the article

  • ASP.NET sending email through exchange problem

    - by Solmead
    I have an Exchange 2010 server running on Windows 2008 R2. I also have a remote webserver running Windows 2003 with multiple sites on it (all ASP.NET MVC 2 sites). I set up a Transport in Exchange, and all the websites on my remote web server can send email no problem, to anyone on the Exchange server and to any external domain.

    Now for my problem. I am having issues with that webserver, so I moved one of the websites to run on my Exchange server. It runs well (low-hit website) except that email doesn't work from that site. I tried changing the Transport in Exchange to add the IP address of the local machine and the 127.0.0.1 address, and it still isn't sending any email. Any ideas on how to get this working?

    The remote websites can still send email no problem, and the version of the site that I had to move can still email from the remote server, but on the Exchange server, email from that website does not send. I would guess it is a Transport issue; since it is running on the same server, a firewall shouldn't be the issue.

    I changed the smtp setting in web.config to localhost, and now I do receive email to my account on the Exchange server, but I do not receive any emails on outside addresses.

    To add more description: this is a custom-developed ASP.NET MVC 2 website, and no errors were being generated in the code when sending the email in either case.

    Read the article

  • Windows XP seemingly out of resources but plenty of free RAM and swap available

    - by Artem Russakovskii
    This one has been bothering me for years and so far I couldn't find an adequate solution. The problem occurs on pretty much every XP install I've done. After opening a variety of programs, or after the system runs existing programs for a while, Windows seemingly runs out of resources without telling me. There's ALWAYS free RAM. For example, it just happened to me and I had over a gig of free RAM. There are no viruses, spyware, or other nonsense - it is a Windows resource problem, but the question is which resource is it running out of, how does one pinpoint it, and how does one prevent it?

    Sometimes this happens after running specific programs - for example, today it happened when I started Photoshop CS4 and Flash CS4 at the same time. I also noticed that restarting The Bat (email client by Ritlabs) seems to get rid of this problem for a while, but again, this happens on machines that don't even have The Bat installed.

    So what exactly happens? The symptoms are:

    - Pressing alt-tab doesn't bring up the list anymore - it just jumps to the next window instantly, very similar to the way Alt-Esc works; however, in this case it's due to not having enough resources to bring up the alt-tab menu.
    - Random programs crash, citing random errors: out of memory errors, system resources, inabilities to do system calls, etc.
    - Random programs start missing random parts - for example, Firefox top menus might disappear, pull up partial selections, or not pull up anymore altogether. IE might lose a few of its toolbars. Some programs might fail to redraw or would just plain go gray where the UI used to be.

    Windows itself never complains about running out of RAM, virtual memory, or anything at all, yet it's running out of something. The only clue I was able to find (and whose fix I applied today) was this Desktop Heap Limitation. I haven't confirmed the fix works, as not enough time has passed. In the meantime, what are everyone's thoughts?

    Read the article

  • Using 1and1.com Servers, SMTP Mail is Limited - Local XAMPP Server Works As Expected

    - by nicorellius
    I'm starting to not like 1and1.com that much. I've used them for years, but mainly for simple sites without much need for configuration. I know there are better hosting companies out there and I may go seeking them.

    The problem here is that on my local XAMPP server (sitting on a network with Comcast ISP), I have a PHP script that uses PEAR::Mail to send mail using MIME. The script works fine locally with either smtp.1and1.com and corresponding credentials or smtp.gmail.com with corresponding credentials, using appropriate ports, etc. 1and1 tells me that I have to change the MX record on the domain where this script runs in order to make this work. This doesn't make sense to me.

    Now, I'm pretty new to all this, but how is it that this is the case? Why can my local server work just fine, out of the box, but their servers not? I have asked them these questions, but they are very vague and I cannot get any good answers from them.

    Versions:

    - PEAR Version: 1.5.0
    - PHP Version: 4.4.9
    - Zend Engine Version: 1.3.0

    My apologies in advance for my ignorance. Thanks for the help in advance.
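
    A first check worth making (my own suggestion; it assumes the hosting plan allows shell access, which not all plans do): shared hosts commonly firewall outbound SMTP ports regardless of DNS records, which would explain why the identical script behaves differently there than on a home connection:

        # can the web host even open a connection to the SMTP servers?
        nc -zv smtp.1and1.com 25 587
        nc -zv smtp.gmail.com 465 587

        # if a port is open, test the STARTTLS handshake end to end
        openssl s_client -connect smtp.gmail.com:587 -starttls smtp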

    Read the article

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:

        # Andy's super awesome VM kickstart file
        install
        url --url=http://mirrors.kernel.org/centos/6/os/x86_64
        lang en_US.UTF-8
        keyboard us
        text
        %include /tmp/network.ks
        rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
        firewall --service=ssh
        authconfig --enableshadow --passalgo=sha512
        selinux --disabled
        timezone America/Los_Angeles
        bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"

        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        zerombr
        clearpart --all --drives=vda --initlabel
        part /boot --fstype=ext4 --size=500
        part pv.253002 --grow --size=1
        volgroup vg1 --pesize=4096 pv.253002
        logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
        logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032

        repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
        repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
        repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
        repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

        %packages
        @core
        @server-policy
        puppet
        facter
        %end

        %pre --erroronfail
        #!/bin/bash
        for x in `cat /proc/cmdline`; do
            case $x in
                SERVERNAME*)
                    eval $x
                    echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
                    ;;
            esac
        done
        %end

        %post
        puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
        %end

        reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

        virt-install -n zabbix -r 2048 --vcpus=2 \
            -l http://mirrors.kernel.org/centos/6/os/x86_64 \
            --disk /dev/vg_inf1/zabbix --network bridge=br85 \
            --initrd-inject=/home/ashinn/vm_kickstart \
            --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" \
            --autostart

    During the install, I can pull up a console on the second terminal and verify the contents of /tmp/network.ks are:

        network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?
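
    One thing worth trying (a known behaviour of the RHEL 6 loader, though I can't confirm it is the cause here): with a URL install method, the network must come up before the kickstart's %pre and %include are processed, and without network details on the kernel command line the loader asks interactively. Passing them via --extra-args may suppress the prompt:

        # same invocation, with ip=dhcp (and ksdevice to pick the NIC)
        # handed to the loader stage up front
        virt-install -n zabbix -r 2048 --vcpus=2 \
            -l http://mirrors.kernel.org/centos/6/os/x86_64 \
            --disk /dev/vg_inf1/zabbix --network bridge=br85 \
            --initrd-inject=/home/ashinn/vm_kickstart \
            --extra-args "ks=file:/vm_kickstart ip=dhcp ksdevice=eth0 SERVERNAME=zabbix" \
            --autostart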

    Read the article

  • Command prompt hangs/freezes/crashes sporadically

    - by Leonard Challis
    I'm finding it very difficult to Google this; I don't seem to be able to find anyone with the same issue, and I don't know enough about the Windows operating system to troubleshoot. The machines we are seeing the problem on are Windows 7 (Professional), both 64-bit and 32-bit.

    The problem is with the command prompt freezing up, seemingly randomly. When it does freeze, nothing will bring it back to life (i.e. a keypress), and it's nothing to do with QuickEdit mode either. It doesn't seem to happen when I run standard commands, such as cd, dir, etc., but when I run different programs from the command line. The annoying thing is that sometimes the prompt will freeze and at other times it won't, using the same program/command in the prompt. To add to the frustration, one of my colleagues who had the same problem seems to not have experienced it for a few days now (we're pretty heavy on the command line).

    It's not a VPN/RDP thing as suggested in other questions and forum posts, as I've seen this both locally and remotely. I thought it was to do with the return code signifying an error or some error state in the program, i.e.:

        C:\Users\leonardc>mysql -u lalala
        ERROR 1045 (28000): Access denied for user 'lalala'@'localhost' (using password: NO)

    but this isn't always the case either. In fact, the above command hasn't crashed the shell before. Elevating the prompt to run as Administrator doesn't seem to have any bearing on the problem either, and disabling my anti-virus doesn't have an effect.

    Update: I tried the same commands in PowerShell, but I still get the same problem: it will freeze at random times (more often than not, as with the command prompt, but not always). It's not the same as the command prompt, in the sense that one might work while the other doesn't, but the next time I try to run the same command in both, it will suddenly be different again.

    Read the article

  • SQL Server rolling forward lots of transactions, what should I look at?

    - by Anthony D
    I am running SQL Server Express on a Windows XP Embedded box. It runs for a day or two, doing some transactional processing for a POS-type system, with another system pulling data out to an OLAP DB for processing. After a while, I see in the event viewer the sequence SQL Server puts out when it restarts: copyrights, command line parameters, and so on. It seems like that coincides with our OLAP process crashing. I then see that when it restarts our transaction DB, it does a recovery, pulling in 10K or so transactions that need to be rolled forward.

    Does this mean SQL has crashed? I don't really see much to indicate what happened.

    Update 1: I noticed I have my memory limit set to 1MB per query and 2TB for the server. These are the defaults. I only have one GB in the box. We have seen SQL crash a whole box by just using all the system memory. In this case, though, the whole box is up when we get to it.

    Read the article

  • Mustek 1200 CP driver SFC4.SYS bluescreens with BAD_POOL_HEADER

    - by Slink84
    I have Windows XP SP2. Recently it started bluescreening right after starting up, with a 'BAD_POOL_HEADER' 0x00000019 error caused by the SFC4.SYS driver. After googling for a while, I found out that this is my Mustek 1200 CP scanner driver. Booting in safe mode and uninstalling it solved the problem... and created another one: now I can't use my scanner.

    The weird thing is that it had been working for a while on this PC without any problems. It all started suddenly, and I can't remember installing anything that might have affected it. Reverting to several earlier system restore points didn't help. I've tried re-installing it from the Mustek website, just in case my copy got corrupted or infected by a virus, but it did not help - it still bluescreens. Also, I've installed Avast and scanned my PC - there were no viruses found. If anyone has had such a problem before or has an idea what might have caused it, please help.

    EDIT: @Michael Todd: ...try installing on another PC... I've installed it on my friend's PC. He has the same OS version, with the latest updates, just like mine (he wasn't too happy, even after I assured him that it is easy to fix by uninstalling that driver :] ). It worked fine - no bluescreens whatsoever. So I think I've narrowed it down to either BIOS settings or some wicked driver conflict. The next thing I'm going to try is to re-install XP, or install Windows 7. I'm not too happy with the prospect of mucking about with BIOS settings...

    Read the article

  • Problems using wondershaper on KVM guest

    - by Daniele Testa
    I am trying to limit bandwidth on one of my KVM guests using wondershaper. Doing something like this works fine:

        wondershaper br23 9000 9000

    Doing a wget with the setting above gives a download speed of about 1MB/sec, like it should. However, it seems this is the highest setting I can use, because setting it to this does not work:

        wondershaper br23 10000 10000

    Doing the same wget with the setting above downloads at full speed, about 70MB/sec in my case. Running a status check returns the following:

        qdisc cbq 1: root refcnt 2 rate 10000Kbit (bounded,isolated) prio no-transmit
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        qdisc sfq 10: parent 1:10 limit 127p quantum 1514b divisor 1024 perturb 10sec
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
        qdisc sfq 20: parent 1:20 limit 127p quantum 1514b divisor 1024 perturb 10sec
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
        qdisc sfq 30: parent 1:30 limit 127p quantum 1514b divisor 1024 perturb 10sec
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
        qdisc ingress ffff: parent ffff:fff1 ----------------
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
        class cbq 1: root rate 10000Kbit (bounded,isolated) prio no-transmit
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        class cbq 1:1 parent 1: rate 10000Kbit (bounded,isolated) prio 5
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        class cbq 1:10 parent 1:1 leaf 10: rate 10000Kbit prio 1
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        class cbq 1:20 parent 1:1 leaf 20: rate 9000Kbit prio 2
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0
        class cbq 1:30 parent 1:1 leaf 30: rate 8000Kbit prio 2
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
         borrowed 0 overactions 0 avgidle 12500 undertime 0

    What am I doing wrong? Does wondershaper have some kind of upper limit?
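
    One way to separate wondershaper from the kernel shaper itself (my own suggestion, not from the post) is to apply a plain token-bucket filter at the same rate; if this caps correctly, the limit is enforceable by tc and the problem sits in wondershaper's CBQ setup:

        # clear whatever wondershaper installed, then apply a bare 10mbit cap
        tc qdisc del dev br23 root 2>/dev/null
        tc qdisc add dev br23 root tbf rate 10mbit burst 32kbit latency 400ms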

    Read the article

  • Web filtering (Proxy or DNS) with option for users to ignore the block

    - by Jon Rhoades
    We are struggling with our users visiting infected or "attack" sites, and with phishing in general. Most of our machines are protected by an enterprise anti-virus and monitoring solution (McAfee ePO) and we try to get people to use Firefox... but no AV is perfect, we have to endure personal machines as well (albeit on their own 'Plague' VLANs), and we would like to do something about phishing, as our users seem intent on disclosing their passwords to the world...

    To complicate matters, we don't want to implement a hard block, for many, many reasons. Instead we would like to implement something akin to Firefox's "Reported Scam/Phish/Attack Site" page - "Get me out of here" or, crucially, "Let me in anyway" - giving the user a choice to still infect themselves if they feel like it (or to look at a site incorrectly blacklisted). The reason we can't just use Firefox is that we have a core enterprise app only certified on IE6 and 7 - thank you, Oracle.

    Is it possible to implement this type of advisory filtering using either a proxy (in our case Squid) or DNS?

    http://serverfault.com/questions/15801/what-free-options-are-available-for-web-content-filtering
    http://serverfault.com/questions/47520/open-source-filtering-of-https-traffic

    These were a good start, but they don't address the advisory aspect of the filtering.

    Read the article

  • Security for university research lab systems

    - by ank
    Being responsible for security in a university computer science department is no fun at all. And I explain: it is often the case that I get a request for installation of new hardware or software systems that are really so experimental that I would not dare put them even in the DMZ. If I can avoid it and force an installation in a restricted inside VLAN, that is fine, but occasionally I get requests that need access to the outside world. And it actually makes sense for such systems to have access to the world for testing purposes.

    Here is the latest request: a newly developed system that uses SIP is in the final stages of development. This system will enable communication with outside users (that is its purpose and the research proposal) - actually hospital patients not so well aware of technology. So it makes sense to open it to the rest of the world.

    What I am looking for is anyone who has experience with dealing with such highly experimental systems that need wide outside network access. How do you secure the rest of the network and systems from this security nightmare without hindering research? Is placement in the DMZ enough? Any extra precautions? Any other options or methodologies?

    Read the article

  • Firefox 3.6 and above. Always show one tab even if all tabs are closed like Firefox 3.0

    - by Jayapal Chandran
    I very much got used to Firefox 3.0. In that version, to free memory, I close all tabs, yet the main Firefox window still does not close. But in Firefox 3.6 and later, if I close all tabs, then Firefox exits entirely. This is not the case with 3.0. How do I stop Firefox from closing the main process when I close all tabs?

    The autocomplete feature in the address bar of Firefox 3.6 and greater is in a dark blue color, which annoys me greatly. With my environment and the monitor glare, it is inducing anger in me, so how can the color be changed to be like Firefox 3.0? Because, you know, black and white are a neutral and good combination, and since I have been working in Firefox 3.0 (and earlier versions) for a long time, this new color change and other uncomfortable options are making me sick.

    To check CSS3 I need to use Firefox 3.5 and greater. Besides, I like Firefox because it implements the W3C's recommendations, so I can learn and test new recommendations from W3C.

    Read the article

  • How to deal with the extremely big *.ost files in a Terminal Server environment which is running out of space

    - by Wolfgang Kuehne
    Our Terminal Server is running out of hard disk space, and the files which occupy most of the space are the Outlook *.ost files belonging to the users who use the Terminal Server all the time through remote desktop. Outlook is installed on the Terminal Server and various users can use it.

    What would be a solution in this case? Is there a way to limit the size of the *.ost files? I read in forums that having Outlook 2010 set up in Cached Exchange Mode isn't best practice for an environment where hard disk space is a major constraint.

    The first thing that came to my mind was folder redirection: place the ost files (together with the AppData folder) on a network share. But this does not help, because the ost files are saved in a part of the AppData folder that cannot be redirected.

    Then I thought: is it possible to limit the size of the ost file? Or limit the time that it keeps email cached - say, just emails from the last 6 months would be sufficient.

    Another solution that came to my mind was moving the ost files somewhere else. This requires the old ost file to be removed and a new one created, and I am not quite sure if the new OST file will still have cached the emails which were available in the old ost, or will start caching from where the other one left off.

    What do you suggest?

    Read the article

  • Running multiple sites on a LAMP with secure isolation

    - by David C.
    Hi everybody,

    I have been administering a few LAMP servers with 2-5 sites on each of them. These are basically owned by the same user/client, so there are no security issues apart from attacks through vulnerable daemons or scripts. I am building my own server and would like to start hosting multiple sites.

    My first concern is... ISOLATION. How can I avoid a c99 script being able to deface all the virtual hosts? Also, should I prevent that c99 from being able to write/read the other sites' directories? (It is easy to "cat" a config.php from another site and then get into the MySQL database.)

    My server is a VPS with 512M, burstable to 1G. Among the free hosting managers, is there any small one which works for my VPS (and is maybe compatible with the security approach I would like to have)? Currently I am not planning to host over 10 sites, but I would not accept that a client/hacker could navigate into unwanted directories or, worse, run malicious scripts. FTP management would be fine; I don't want to complicate things with SSH isolation.

    What is the best practice in this case? Basically, what do hosting companies do to sleep well? :)

    Thanks very much!
    David
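
    A minimal sketch of the usual first layer (user and path names are my own; it assumes PHP runs as each site's own user, e.g. via suPHP or a per-site PHP-FPM pool, since a shared mod_php process can read anything the web server can):

        # one unix account per site, home set to its docroot
        useradd -m -d /var/www/site1 -s /usr/sbin/nologin site1

        # the site owns its files; the web server group may traverse,
        # every other site's user gets nothing
        chown -R site1:www-data /var/www/site1
        chmod 750 /var/www/site1
        find /var/www/site1 -type f -exec chmod 640 {} +

    With that in place, a c99 shell running as site1 cannot cat another site's config.php, because the other tree is mode 750 under a different owner.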

    Read the article

  • Is there a free XML viewer to do "pivot tables"

    - by FumbleFingers
    I have an *.xml file on a PC running Windows XP, with a structure something like...

        <movies id="my movies">
          <movie name="Unforgiven">
            <people star="Clint Eastwood" director="Clint Eastwood"/>
          </movie>
          <movie name="A Fistful of Dollars">
            <people star="Clint Eastwood" director="Sergio Leone"/>
          </movie>
          <movie name="J Edgar">
            <people star="Leonardo DiCaprio" director="Clint Eastwood"/>
          </movie>
        </movies>

    I want to open this file in a viewer utility which will show one line per movie, filtered by a condition such as director="Clint Eastwood", including the relevant value for movie name on each line.

    Note - that's just an example. My actual file has thousands of lines, each possibly several hundred bytes long, and there are many more levels and named values. The important thing is I want to apply a filter for a specified value of some field at some level, and I want to see one line for every case where that value occurs, showing the values for any fields I specify, even if they're stored at higher levels. I may be mistaken in saying what I want is a "pivot table" - I don't know what I should call it.
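
    Not a viewer as such, but if a command-line tool is acceptable, xmlstarlet (which has Windows builds) does exactly this kind of filter-and-project query; a sketch, assuming the data above is saved as movies.xml (a file name of my own choosing):

        # one line per movie directed by Clint Eastwood, printing the
        # name attribute from the enclosing <movie> element
        xmlstarlet sel -t -m '//movie[people/@director="Clint Eastwood"]' -v @name -n movies.xml

    The -m clause selects the matching nodes and -v pulls any value relative to them, including values from higher levels via ../.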

    Read the article

  • How to manage mounted partitions (fstab + mount points) from puppet

    - by Cristian Ciupitu
    I want to manage the mounted partitions from puppet, which includes both modifying /etc/fstab and creating the directories used as mount points. The mount resource type updates fstab just fine, but using file for creating the mount points is a bit tricky. For example, by default the owner of the directory is root, and if the root (/) of the mounted partition has another owner, puppet will try to change it, and I don't want that. I know that I can set the owner of that directory, but why should I care what's on the mounted partition? All I want to do is mount it. Is there a way to make puppet not care about the permissions of the directory used as the mount point?

    This is what I'm using right now:

        define extra_mount_point(
            $device,
            $location = "/mnt",
            $fstype   = "xfs",
            $owner    = "root",
            $group    = "root",
            $mode     = 0755,
            $seltype  = "public_content_t",
            $options  = "ro,relatime,nosuid,nodev,noexec",
        ) {
            file { "${location}/${name}":
                ensure  => directory,
                owner   => "${owner}",
                group   => "${group}",
                mode    => $mode,
                seltype => "${seltype}",
            }

            mount { "${location}/${name}":
                atboot  => true,
                ensure  => mounted,
                device  => "${device}",
                fstype  => "${fstype}",
                options => "${options}",
                dump    => 0,
                pass    => 2,
                require => File["${location}/${name}"],
            }
        }

        extra_mount_point { "sda3":
            device  => "/dev/sda3",
            fstype  => "xfs",
            owner   => "ciupicri",
            group   => "ciupicri",
            options => "relatime,nosuid,nodev,noexec",
        }

    In case it matters, I'm using puppet-0.25.4-1.fc13.noarch.rpm and puppet-server-0.25.4-1.fc13.noarch.rpm.

    Read the article

  • Error when starting qpidd as a service

    - by Sparks
    I have recently swapped from CENTOS 5 to FEDORA 17. Previously I have created my own init.d scripts successfully (albeit not for qpidd) however, in FEDORA I cannot get it to work. I have created the following script (called qpidd) in the init.d directory: #!/bin/bash # # /etc/rc.d/init.d/qpidd # # QPID/AMQP Broker scripts # # # chkconfig: 2345 20 80 # description: QPID/AMQP Broker service # processname: qpidd # pidfile: /var/lock/subsys/qpidd # Source function library. . /etc/init.d/functions SERVICENAME=qpidd start() { echo -n "Starting $SERVICENAME: " daemon qpidd -d & retval=$? touch /var/lock/subsys/$SERVICENAME return $retval } stop() { echo -n "Shutting down $SERVICENAME: " qpidd -q & retval=$? rm -f /var/lock/subsys/$SERVICENAME return $retval } case "$1" in start) start ;; stop) stop ;; status) status qpidd ;; restart) stop start ;; condrestart) [ -f /var/lock/subsys/<service> ] && restart || : ;; *) echo "Usage: $SERVICENAME {start|stop|status|restart" exit 1 ;; esac exit $? After this, I ran chkconfig --add qpidd, however, now when I run sudo service qpidd start I get the following message: Starting qpidd (via systemctl): Job failed. See system journal and 'systemctl status' for details. If I then run systemctl status qpidd I get the following message: Failed to issue method call: Unit name qpidd is not valid. I am now lost, I have search the web and Stack Overflow but cannot find anybody with similar problem, any help or direction to a website that can help would be much appreciated Sparks :)

    Read the article

  • Analyze a BSOD (irql_less_than_or_equal)

    - by Bruno Reis
    Hello. About 2 months ago I bought a new system and built it at home: Mother board: XFX X58i Processor: Core i7 920, using the stock cooler Memory: 3x2GB Corsair DDR3 1600 Video card: NVIDIA GTS 250 (1GB) Hard disk: 2x WD 500GB, 7200rpm I have 2 screens plugged into the video card, and the system is connected to a 550W PSU. Nothing is overclocked. After building the system, I stressed it a lot with Prime95 and rthdribl to check its stability. All my tests were perfect. So I reinstalled Win 7 x64 Professional and started using it normally. The first week (2010-03-15) I got the infamous irql_less_than_or_equal BSOD. Ten days after (2010-03-24) I got another one. Then on 2010-04-09, 2010-05-04. Since 2 days ago it became worse: I got one bluescreen per day! (2010-05-12, 2010-05-13, 2010-05-14). I installed BlueScreenView to try to obtain some information, but I'm not able to extract any useful information apart from the bug check string (irql_less_than_or_equal), and that it was caused by ntoskrnl.exe (the first three at ntoskrnl.exe+71f00, the last 4 at ntoskrnl.exe+70600 -- which I suspect could be the same thing, as Microsoft could have patched this file in the mean time, so the address of the function causing it changed). Then I stressed my memory sticks with memtest, they worked perfectly. After booting, I've stressed my GPU with FurMark and RTHDRIBL, everything was fine. Then I stressed the CPU with 4 instances of Prime95 while monitoring the temperature -- that never exceeded 85oC with the case closed --, everything fine. Finally I've stressed the whole system with HeavyLoad for a looooong time, everything worked just fine. So, I have stressed most of the components of the system, but couldn't get any useful information from it. Do you have any hint on what else can I do to find the culprit? Thanks Bruno

    Read the article
