Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.

  • IPtables: DNAT not working

    - by GetFree
    In a CentOS server I have, I want to forward port 8080 to a third-party webserver, so I added this rule:

        iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination thirdparty_server_ip:80

    But it doesn't seem to work. In an effort to debug the process, I added these two LOG rules:

        iptables -t mangle -A PREROUTING -p tcp --src my_laptop_ip --dport ! 22 -j LOG --log-level warning --log-prefix "[_REQUEST_COMING_FROM_CLIENT_] "
        iptables -t nat -A POSTROUTING -p tcp --dst thirdparty_server_ip -j LOG --log-level warning --log-prefix "[_REQUEST_BEING_FORWARDED_] "

    (The --dport ! 22 part is there just to filter out the SSH traffic so that my log file doesn't get flooded.) According to this page, the mangle/PREROUTING chain is the first one to process incoming packets and the nat/POSTROUTING chain is the last one to process outgoing packets. And since the nat/PREROUTING chain comes in the middle of the other two, the three rules should do this:

    1. The rule in mangle/PREROUTING logs the incoming packets.
    2. The rule in nat/PREROUTING modifies the packets (it changes the destination IP and port).
    3. The rule in nat/POSTROUTING logs the modified packets about to be forwarded.

    Although the first rule does log incoming packets coming from my laptop, the third rule doesn't log the packets which are supposed to be modified by the second rule. It does log, however, packets that are produced in the server, hence I know the two LOG rules are working properly. Why are the packets not being forwarded, or at least why are they not being logged by the third rule?

    PS: there are no more rules than those three. All other chains in all tables are empty and with policy ACCEPT.
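
    A minimal sketch of the pieces a DNAT forward to an external host usually needs besides the PREROUTING rule itself (addresses are placeholders, not taken from the question):

        # Forwarding must be enabled, or the kernel drops the DNATed
        # packet during routing -- before it ever reaches nat/POSTROUTING
        sysctl -w net.ipv4.ip_forward=1

        # The FORWARD chain must accept the redirected traffic
        # (relevant once its policy is no longer ACCEPT)
        iptables -A FORWARD -p tcp -d thirdparty_server_ip --dport 80 -j ACCEPT

        # Replies from the third-party server must route back through this
        # box, so rewrite the source address on the way out
        iptables -t nat -A POSTROUTING -p tcp -d thirdparty_server_ip --dport 80 -j MASQUERADE

    If net.ipv4.ip_forward is 0, the symptoms match exactly: the mangle/PREROUTING LOG fires, the DNAT happens, and the packet is then discarded before the nat/POSTROUTING LOG can see it.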

  • How to make a Linux software RAID1 detect disc corruption?

    - by Paul
    This is one of the nightmare days: a virtualized server running on a Linux SW-RAID1 runs a VM that exhibits random segfaults in seemingly random code chunks. While debugging, I find that a file gives different md5sums on each and every run. Digging deeper I find this: the raw disc partitions that make up the RAID1 mirror contain 2 bit-differences, and ca. 9 sectors are completely empty on one disc and filled with data on the other disc. Obviously Linux gives back a sector from a nondeterministically chosen disc of the mirror set. So sometimes the same sector is returned OK, sometimes the corrupted one is given back.

    The docs say:

        RAID cannot and is not supposed to guard against data corruption on
        the media. Therefore, it doesn't make any sense either, to purposely
        corrupt data (using dd for example) on a disk to see how the RAID
        system will handle that. It is most likely (unless you corrupt the
        RAID superblock) that the RAID layer will never find out about the
        corruption, but your filesystem on the RAID device will be corrupted.

    Thanks. That will help me sleep. :-/

    Is there a way to have Linux at least detect this corruption by using sector checksumming or something like that? Would this be detected in a RAID5 setup? Is this the moment I wish I used ZFS or btrfs (once it becomes usable without uber-admin capabilities)?
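
    For what it's worth, the md layer can at least detect (though not attribute) this kind of divergence on demand; a sketch, assuming the array is md0:

        # Ask md to read both halves of the mirror and compare them
        echo check > /sys/block/md0/md/sync_action

        # After the check completes: non-zero means the mirror halves
        # disagree (counted in 512-byte units)
        cat /sys/block/md0/md/mismatch_cnt

    A repair pass (echo repair > .../sync_action) rewrites mismatched sectors, but on RAID1 it picks one copy arbitrarily, so it restores consistency rather than correctness.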

  • Folder redirection GPO doesn't seem to be working

    - by homli322
    I've been trying to set up roaming profiles and folder redirection, but have hit a bit of a snag with the latter. This is exactly what I've done so far (I have OU permissions and GPO permissions over my division's OU):

    1. Created a group called Roaming-Users in the OU 'Groups'.
    2. Added a single user (testuser) to the group.
    3. Using the Group Policy Management tool (via RSAT on Windows 7), right-clicked on the Groups OU and selected 'Create a GPO in this domain, and Link it here'.
    4. Added my Roaming-Users group to the Security Filtering section of the policy.
    5. Added the Folder Redirection option, specifically for Documents. It is set to redirect to \\myserver\Homes$\%USERNAME%\Documents (Homes$ exists and is sharing-enabled).
    6. Right-clicked on the policy under the Groups OU and checked Enforced.
    7. Logged into a machine as testuser successfully.
    8. Created a simple text file, saved some gibberish, logged off.
    9. Remoted into the server with Homes$ on it, and noticed that the directory Homes$\testuser was created, but was empty. No text file to be found.

    From what I've read, I did everything I ought to, but I can't quite figure out the issue. I had no errors when I logged off about syncing issues (offline files is enabled) or anything, so I can only imagine my file should have ended up on the share. Any ideas?

    EDIT: Using gpresult /R, I confirmed the user is in fact part of the Roaming-Users group, but does not have the policy applied, if that helps.

    EDIT 2: Apparently you can't apply GPOs to groups... so I applied to users and used the same security filter to limit it to my test user. Nothing happens as far as redirection goes, but I now have the following error in the event log:

        Folder redirection policy application has been delayed until the
        next logon because the group policy logon optimization is in effect
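
    That last event message points at Fast Logon Optimization: folder redirection can only be applied synchronously, so with the optimization on it is deferred to a later logon. A sketch of the usual next steps (standard Windows tools, nothing specific to this domain):

        :: Force a refresh, then log off and on -- possibly twice, since
        :: redirection applies at logon, not at refresh
        gpupdate /force

        :: Confirm the GPO now appears under "Applied Group Policy Objects"
        gpresult /R

    Enabling "Always wait for the network at computer startup and logon" (Computer Configuration > Administrative Templates > System > Logon) turns the optimization off, letting redirection apply on the first logon.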

  • ssh_exchange_identification: Connection closed by remote host

    - by rick
    Firstly, I know that this question has been asked a million times, and I have read everything I can find and still cannot fix the problem. I am encountering this issue when SSHing in from my Mac to my Ubuntu server on a fresh install of Ubuntu (I reinstalled because of this issue). I have SSH mapped to port 7070 because my ISP is blocking 22. On the client:

        $ ssh -p 7070 -v [email protected]
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to address.org port 7070.
        debug1: Connection established.
        debug1: identity file /home/me/.ssh/identity type -1
        debug1: identity file /home/me/.ssh/id_rsa type 1
        debug1: identity file /home/me/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    Here's what I have done to try to resolve the issue:

    - Made sure my MaxStartups is OK:

        $ grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60

    - Made sure hosts.deny is clear of denials.
    - Made sure hosts.allow has my client IP.
    - Cleared out known_hosts on the client.
    - Changed ownership of /var/run to root.
    - Made sure etc/run/ssh is
    - Made sure /var/empty exists.
    - Reinstalled openssh-server.
    - Reinstalled Ubuntu.

    When I run telnet localhost, I get this:

        $ telnet localhost
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    When I run /usr/sbin/sshd -t:

        Could not load host key: /etc/ssh/ssh_host_rsa_key
        Could not load host key: /etc/ssh/ssh_host_dsa_key

    When I regenerate the keys with

        ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
        ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

    I get the same error. I am pretty sure this is the issue. Can anyone help?
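
    If sshd -t still complains after regenerating, the keys may have been written with a passphrase or with permissions sshd rejects. A sketch of a clean regeneration on Debian/Ubuntu, assuming the default key paths:

        # Remove the old host keys and let the package regenerate them
        sudo rm /etc/ssh/ssh_host_*key*
        sudo dpkg-reconfigure openssh-server

        # Or by hand: -N '' guarantees an unencrypted key, which host
        # keys must be
        sudo ssh-keygen -t rsa -N '' -f /etc/ssh/ssh_host_rsa_key

        # sshd refuses group/world-readable private keys
        sudo chmod 600 /etc/ssh/ssh_host_*_key
        sudo service ssh restart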

  • Removing/modifying LDAP objectclasses/attributes using olc

    - by Foezjie
    I'm having trouble using OpenLDAP's olc to modify a schema without shutting down the server. To test some things out, I made the following schema:

        objectIdentifier tests orgUlyssisOID:4
        objectIdentifier testAttribute tests:1
        objectIdentifier testObjectClass tests:2

        attributeType ( testAttribute:1 NAME 'attr1' DESC 'attribuut 1' SYNTAX '1.3.6.1.4.1.1466.115.121.1.40' )
        attributeType ( testAttribute:2 NAME 'attr2' DESC 'attribuut 2' SUP userPassword SINGLE-VALUE )
        objectclass ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    and added it to a new schema called test (cn={9}test.ldif in cn=schema). Now I can't seem to figure out how to delete class1 from that schema. I use the following LDIF (and tried lots of variations too, to no avail):

        dn : cn={9}test,cn=schema,cn=config
        changetype: modify
        delete: olcObjectClasses
        olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    Running ldapmodify -x -W -D cn=admin,cn=config -f test.ldif -d 0 gives no output. -d 1 gives this:

        ldap_create
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP localhost:389
        ldap_new_socket: 4
        ldap_prepare_socket: 4
        ldap_connect_to_host: Trying 127.0.0.1:389
        ldap_pvt_connect: fd: 4 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({i) ber:
        ber_flush2: 38 bytes to sd 4
        ldap_result ld 0x7f2a8ccf3430 msgid 1
        wait4msg ld 0x7f2a8ccf3430 msgid 1 (infinite timeout)
        wait4msg continue ld 0x7f2a8ccf3430 msgid 1 all 1
        ** ld 0x7f2a8ccf3430 Connections:
        * host: localhost port: 389 (default) refcnt: 2 status: Connected
        last used: Mon Sep 10 11:29:57 2012
        ** ld 0x7f2a8ccf3430 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
        outstanding referrals 0, parent count 0
        ld 0x7f2a8ccf3430 request count 1 (abandoned 0)
        ** ld 0x7f2a8ccf3430 Response Queue:
        Empty
        ld 0x7f2a8ccf3430 response count 0
        ldap_chkResponseList ld 0x7f2a8ccf3430 msgid 1 all 1
        ldap_chkResponseList returns ld 0x7f2a8ccf3430 NULL
        ldap_int_select
        read1msg: ld 0x7f2a8ccf3430 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 12 contents:
        read1msg: ld 0x7f2a8ccf3430 msgid 1 message type bind
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0x7f2a8ccf3430 0 new referrals
        read1msg: mark request completed, ld 0x7f2a8ccf3430 msgid 1
        request done: ld 0x7f2a8ccf3430 msgid 1
        res_errno: 0, res_error: <>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_result
        ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_free_connection 1 1
        ldap_send_unbind
        ber_flush2: 7 bytes to sd 4
        ldap_free_connection: actually freed

    So no real indication of an error. Where am I doing it wrong?

    Bonus question: if I have some entries of a certain objectclass, can I modify it (add/remove attributeTypes) without removing the entries? Thanks in advance for all help.
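
    Two things worth checking, offered as guesses: LDIF requires "dn:" with no space before the colon, and the value stored under cn=config has the objectIdentifier macros expanded to numeric OIDs, so a delete must match the stored form byte for byte. A sketch (the {0} index and OID are placeholders; copy the real value from the search):

        # Read back the value exactly as slapd stores it
        ldapsearch -x -W -D cn=admin,cn=config \
            -b 'cn={9}test,cn=schema,cn=config' olcObjectClasses

        # delete-class1.ldif, built from the output above:
        #   dn: cn={9}test,cn=schema,cn=config
        #   changetype: modify
        #   delete: olcObjectClasses
        #   olcObjectClasses: {0}( <numeric-oid> NAME 'class1' ... )
        ldapmodify -x -W -D cn=admin,cn=config -f delete-class1.ldif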

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns I should have with what I am seeing (if any).

    Here is the scenario: under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3-server cluster loses about 100 MB/hour. This is a linear slope for about 6 hours, then it appears to level out, but still loses maybe 10 MB/hour. If I drop my page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the unicorns, and the memory-loss pattern begins again over the hours.

    Before:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    5005376    2124868          0     113628     422856
        -/+ buffers/cache:    4468892    2661352
        Swap:     33554428          0   33554428

    After:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    4467144    2663100          0        228      11172
        -/+ buffers/cache:    4455744    2674500
        Swap:     33554428          0   33554428

    My Ruby code does use memoizations, and I'm assuming Ruby/Rails/Unicorn is keeping its own caches... what I'm wondering is: should I be worried about this behaviour? FWIW, my Unicorn config:

        worker_processes 15
        listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
        listen 8080, :tcp_nopush => true
        timeout 180
        pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"

        GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true

        before_fork do |server, worker|
          STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
          print_gemfile_location

          defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
          defined?(Resque) and Resque.redis.client.disconnect

          old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
              # already killed
            end
          end

          File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}
        end

        after_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
          defined?(Resque) and Resque.redis.client.connect
        end

    Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, it will empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour? tia
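
    Reading the free output, the number that matters is the -/+ buffers/cache line, and it barely moves (4468892 vs 4455744 KB used); the "lost" memory is the kernel page cache, which is exactly what drop_caches discards. A quick sketch for watching what the workers themselves hold, to separate the two effects:

        # Sum the resident set size of all unicorn processes, in MB
        # (the command name may show as "unicorn" or "ruby")
        ps ax -o rss=,command= | awk '/unicorn/ { sum += $1 } END { print sum/1024, "MB" }'

    If that total stays flat while "free" falls, this is normal: the kernel reclaims cache on demand, and there is no need to drop it manually.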

  • Not able to access external Hard disk

    - by Jash Jacob
    I have a 1 TB external hard drive which I'm currently not able to access. When I open the external drive in Finder, it shows as empty. When I use the "Get Info" option, I get a dialog box stating it has about 300 GB free. I tried to get into the external drive using Terminal, but had no luck. Checking in Disk Utility, it showed that I have a large number of files but ZERO folders. I tried to "Repair Disk"; in the process, the external drive got unmounted.

    I checked this drive on Windows. I was able to open almost all the folders, but I wasn't able to copy anything onto the external drive. One folder caused my Windows computer to hang, so I connected the drive back to my MacBook Pro and tried to access the drive through Terminal (this time it worked!), and then I tried to delete the folder with the rm command. I got an "input/output error".

    What should I do to recover the files in that folder? How can I access my external drive on my Mac?
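
    An input/output error from rm, plus the drive unmounting mid-repair, usually points at failing sectors; in that case the safe order of operations is to image the disk before any more repair attempts. A sketch with GNU ddrescue (installable via Homebrew or MacPorts), assuming the drive shows up as disk2 in diskutil list:

        # Keep the device node but get the volumes out of use
        diskutil unmountDisk /dev/disk2

        # Copy everything readable first, retrying bad areas later;
        # the mapfile lets an interrupted run resume where it left off
        sudo ddrescue -d /dev/rdisk2 external.img external.mapfile

    File recovery can then be run against external.img instead of the ailing disk.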

  • How important is dual-gigabit LAN for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS server with a single Gigabit NIC?

    Long story long: I know the absolute max I can get out of a single Gigabit port is 125 MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250 MB/s (OK, not really, but it would be faster). The drives have an average read throughput of 83.87 MB/s according to Tom's Hardware.

    The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8 GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-Gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty.

    Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-Gigabit configuration? I know there are consumer NAS units with just one Gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.
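
    Since the plan leans on NIC bonding, here is a sketch of what that half looks like on Ubuntu (package ifenslave; interface names and addresses are assumptions):

        # /etc/network/interfaces
        auto bond0
        iface bond0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            bond-slaves eth0 eth1
            bond-mode balance-alb   # or 802.3ad if the switch speaks LACP
            bond-miimon 100

    One caveat worth weighing: most bonding modes cap a single client at one link's throughput, so the doubling shows up in aggregate across the six machines, not for any one stream.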

  • Understanding the Mounting of a Filesystem

    - by Tom H.
    I'm new to Linux and want to check my understanding of how mounting/filesystems work. I read the related manpages, but just want to be sure. I have a partition, say /dev/sda5, that is currently mounted to /home with various subdirs. It is my understanding that this means /dev/sda5 has its own portable filesystem that can be moved anywhere in the main filesystem.

    Suppose I unmount /dev/sda5 from /home (# umount /home), then mount it to /var/www (which is empty) (# mount -t ext3 /dev/sda5 /var/www), replace the fstab entry with

        /dev/sda5  /var/www  ext3  defaults,noatime,nodev  1  2

    and run # mount -a. Questions:

    Q1) Are all of the contents of /home now accessible under /var/www (i.e. /home/username -> /var/www/username)?
    Q2) Are all of the permissions from the /home filesystem kept intact in this new location?

    Anything else I should be concerned with? Just want to make sure I don't go wipe/corrupt anything. Coming from Windows, the filesystem architecture takes getting used to (though I'm loving the flexibility!).
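
    For what it's worth, the answer to both is yes: ownership and permission bits live in the filesystem on /dev/sda5, not in the mount point, so they travel with it. A sketch of a cautious way to see this for yourself before committing the fstab change:

        # Record numeric owners and modes while still mounted at /home
        ls -ln /home > /tmp/before.txt

        umount /home
        mount -t ext3 /dev/sda5 /var/www

        # Same inodes, same metadata, new path
        ls -ln /var/www > /tmp/after.txt
        diff /tmp/before.txt /tmp/after.txt && echo identical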

  • Blending Background for Polar Distortion in GIMP

    - by Chris S
    I followed a tutorial to perform a polar distortion on a panoramic image. The instructions are geared towards Photoshop but seem to mostly apply to GIMP as well. The only thing I couldn't really figure out was how they were able to automatically fill in the area around the circle by "extending" the border of the circle. In GIMP, performing the polar distortion leaves a blank white canvas around the circle, not the attractive blended background shown in the tutorial. Is there an easy way to achieve this?

    The only way I found was to reserve half of the "square" as blank canvas and then manually copy the image's top row of pixels over this empty portion. Then, after the polar distortion, I crop out the extra area. Although this achieves the effect, it seems a bit awkward.

    How do you stretch selections? Ideally, I just want to select the top row and stretch it vertically until it fills half of the canvas. Instead I had to manually copy, paste, translate, etc.

  • Is there a reliable way to troubleshoot wake up from sleep in Windows?

    - by Borek
    Is there a reliable way to troubleshoot laptop wakes? I've seen "heuristics" posted here and there, but isn't there really a simple and deterministic way to tell what's causing a problem? Specifically, my laptop wakes up about every hour for about 2 minutes. Exported event log entries are here: http://www.mediafire.com/?abcqb00v5wyo6pj. I've tried:

    - powercfg -devicequery wake_armed: empty result set.
    - Scheduled tasks: the main ones are not scheduled to run every hour. When I go through the long list of all possible tasks, there are some that are set to be triggered every hour (e.g., MS "RacTask", whatever it is). But when I go to Power Options > Advanced settings > Sleep, "Allow wake timers" is set to "Disable". Also, the specific task is not set to wake the computer if necessary.
    - Power options for my Ethernet card don't enable it to wake the computer; the cable is disconnected anyway.
    - There are no other HW devices attached: no USB disks, no keyboards/mice, etc.

    I am really clueless and quite unhappy that it's so hard to troubleshoot this situation.
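
    Two powercfg queries not in the list above often answer this directly; both exist on Windows 7 and want an elevated prompt:

        :: What woke the machine most recently
        powercfg -lastwake

        :: Every active wake timer and the process or task that armed it
        powercfg -waketimers

    The Power-Troubleshooter source in the System event log also records a "Wake Source" for each wake, which can be cross-checked against those results.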

  • Control Panel as menu includes a blank item

    - by Matthew Ferreira
    When viewed as a menu attached to the Start Menu in Windows 7 Ultimate x64, the Control Panel contains a blank item. This item cannot be deleted or removed, and I also cannot create a shortcut to it. No error message is displayed; instead, simply nothing happens.

    I've tried using Shell Object Editor (using Run as Administrator) to find out if there is an errant entry in the Control Panel, but many entries (almost two dozen) are blank, alongside several valid entries. I've looked through the registry and through C:\Windows, \system32, and \SysWOW64, but have had no success. I looked at this question, but I am not using Windows XP and thus have no option to use Tweak UI's Rebuild Icons function.

    Please note that there is no empty entry in the Control Panel when opened normally, only when attached to the Start Menu as a menu. I have compared the list of entries in the attached menu to the normal Control Panel, and other than the blank entry, they are exactly the same. Nothing is missing from one or the other. I've also compared the menu and the normal view to reference images and lists of Control Panel items and have found no irregularities.

    Is anyone familiar with this problem or know of a solution? I've performed virus and malware scans and found nothing. I've used CCleaner with no change. Nothing with Shell Object Editor. Nothing with Registry Editor. Certainly someone here knows how to fix this. My only guess is the many blank entries visible in Shell Object Editor, but I am reluctant to delete that many items without further analysis and guidance. I appreciate your time and consideration.

  • How to clone MySQL DB? Errors with CREATE VIEW/SHOW VIEW privileges

    - by user38071
    Running MySQL 5.0.32 on Debian 4.0 (Etch). I'm trying to clone a WordPress MySQL database completely (structure and data) on the same server. I tried doing a dump to a .sql file and an import into a new empty database from the command line, but the import fails with errors saying the user does not have the "SHOW VIEW" or "CREATE VIEW" privilege. Trying it with phpMyAdmin doesn't work either. I also tried doing this with the MySQL root user (not named "root" though) and it shows an "Access denied" error.

    I'm terribly confused as to where the problem is. Any pointers on cloning a MySQL DB and granting all privileges to a user account would be great (specifically for MySQL 5.0.32). The SHOW GRANTS command for the existing user (the one who has privileges over the source database) works; it shows that the user has all privileges granted.

    I created a new user and database. Here's what I see with the grant commands:

        $ mysql -A -umyrootaccount --password=myrootaccountpassword

        mysql> grant all privileges on `newtarget_db`.* to 'newtestuser'@'localhost';
        ERROR 1044 (42000): Access denied for user 'myrootaccount'@'localhost' to database 'newtarget_db'

        mysql> grant all privileges on `newtarget_db`.* to 'existingsourcedbuser'@'localhost';
        ERROR 1044 (42000): Access denied for user 'myrootaccount'@'localhost' to database 'newtarget_db'
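
    ERROR 1044 on a GRANT usually means the granting account itself lacks the privilege (or lacks GRANT OPTION) on that database. A sketch of the check and of the clone itself, assuming an account that genuinely has global rights (names are placeholders):

        -- Does the administrative account actually hold GRANT OPTION on *.*?
        SHOW GRANTS FOR 'myrootaccount'@'localhost';

        CREATE DATABASE newtarget_db;
        GRANT ALL PRIVILEGES ON newtarget_db.* TO 'newtestuser'@'localhost';
        FLUSH PRIVILEGES;

    and then, from the shell:

        # Dump structure and data (views included on 5.0) and load them
        mysqldump -u myrootaccount -p --single-transaction source_db > source_db.sql
        mysql -u myrootaccount -p newtarget_db < source_db.sql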

  • How to reinstall Windows 7 Embedded?

    - by Joshua Lim
    I need to reinstall Windows 7 Embedded on my server, but I'm not able to do so despite repeated tries. I tried booting up the server with the Windows Embedded 7 Setup ISO attached (using IPMI), and I've also tried running setup.exe from the CD-ROM after Windows has booted up. Both methods fail: in the first case, the server simply reboots by itself after I select the "IBW" button; in the second case, the installer reports missing files while installing.

    I'm sure my Windows Embedded 7 Setup ISO is correct, because earlier on I used IBW from the same ISO to install Windows Embedded 7 onto the server. Of course, the C drive was empty when I first installed.

    What should I do? I read that the normal Windows 7 (not Embedded) installer allows you to reformat the C drive before reinstalling; there does not appear to be such an option for Windows Embedded. Appreciate any tip. Thanks.

  • Blank list of Windows services

    - by Joe
    Recently, when I open Windows Services (always as administrator), I get a blank list of services. When I try to click on one of the empty lines, I get a "Script Error" message. This happens over and over again; after several times, I restarted my computer. I can't pinpoint exactly when this started happening or whether I made any specific changes to my computer at that time.

    Someone told me to try running sfc /scannow as administrator, but when I try to do that the scan stops at 34% and I get the message: "Windows Resource Protection could not perform the requested operation." I am running Windows 7 Enterprise 64-bit, and I would really like to avoid reinstalling Windows. Does anyone know how to fix this?

    Edit - Here is another attempt I made and some more information that might help: following WhoIsRich's suggestion, I tried the command sfc /scannow /offbootdir=c:\ /offwindir=c:\windows. This gave the error message "The arguments passed to sfc are invalid. The offline windows directory specified points to the online system", and then I realized this command is meant to be run after booting from another system. Since I don't have my Windows installation disk right now, I used my own system to create a recovery disk, then restarted my computer and used the recovery disk to boot. I ran the above command and got the following message: "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.log".

    I then restarted my computer and let it boot up normally. The problem with Windows Services persists, and the CBS.log file is a long log file with many entries; I don't know if there is useful information in it, and if there is, how to find it.
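
    To pull the relevant lines out of CBS.log, the SFC entries are all tagged [SR]; filtering on that tag is the documented way to see which files it could not repair:

        findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"

    Lines reading "Cannot repair member file" name the components that are still corrupt.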

  • Why is the Windows 8 recycle bin using more space than it is allocated?

    - by oldmankit
    I ran WinDirStat to scan the contents of my hard drive. I was surprised to see that the $RECYCLE.BIN folder on my D: drive takes 26 GB of space. I emptied the Recycle Bin and refreshed the folder in WinDirStat, but it still takes 26 GB. I reduced the maximum size of the Recycle Bin for this drive to 10000 MB for the main user of this computer, disabled the Recycle Bin for the other user, and refreshed the folder in WinDirStat, but it still takes 26 GB. I then ran (in an elevated window)

        rd /s D:\$Recycle.bin

    refreshed the folder in WinDirStat, and finally it became empty.

    Why was it taking up space even after I emptied it? Why was it taking more space (26 GB) than the maximum allowed amount (10 GB)?

    Update: After six months of using Windows (no reinstall and no changes to settings related to the Recycle Bin), I used WinDirStat to check how big D:\$RECYCLE.BIN has become. It is now 29 GB. In Recycle Bin Properties, I select drive D, and it is still at a custom maximum size (10000 MB).
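
    A plausible explanation, offered as a guess: $RECYCLE.BIN holds one subfolder per user SID, the size limit applies per user, and "Empty Recycle Bin" only clears the folder belonging to the current user, so SID folders orphaned by old installs or deleted accounts keep their contents indefinitely. This lists what is actually in there:

        dir /a /s D:\$RECYCLE.BIN

    Folders named like S-1-5-21-... that do not match a current account would explain both the excess size and its survival across emptying.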

  • Ubuntu 12.04 open port 80 inside WLAN

    - by Eduard
    I have an nginx server running on Ubuntu 12.04 that serves HTTP on port 80 and HTTPS on port 443. Everything works fine if I access it from the same computer via localhost, 127.0.0.1 or the local IP 192.168.0.11. If I try to access the server from another computer in the same VLAN, it does not work for HTTP; it works for HTTPS.

    I have changed my nginx configuration to also listen on port 8000 for HTTP; I can then access HTTP from the other computer in the same VLAN via http://192.168.0.11:8000. I also have a web server running on port 80 on a Windows machine and can access it from another device in the same VLAN, therefore the router is not blocking incoming HTTP traffic.

    The nginx process is run by root. I have used tcpdump and I see that packets are arriving at the Ubuntu machine:

        192.168.0.16.49735 > 192.168.0.11.80

    and that some response is being given:

        192.168.0.11.80 > 192.168.0.16.49735

    (I do not know what the response is, though.) There is no request arriving at the nginx web server (I have checked the access log). My iptables rules are empty.

    I have unsuccessfully tried to find a solution for a long time; it has now become a matter of happiness or bitterness :).
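
    A sketch for narrowing down who owns port 80 and what the machine actually answers (if the listener is bound to 127.0.0.1 rather than 0.0.0.0, that would produce exactly this localhost-works/LAN-fails split):

        # Which process listens on :80, and on which address?
        sudo netstat -tlnp | grep ':80 '

        # Watch the exchange with the other computer, including TCP flags;
        # a RST reply means the connection never reached a listener
        sudo tcpdump -ni eth0 tcp port 80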

  • Mechanism behind user forwarding in ScriptAliasMatch

    - by jolivier
    I am following this tutorial to set up gitolite, and at some point the following ScriptAliasMatch is used:

        ScriptAliasMatch \
            "(?x)^/(.*/(HEAD | \
                    info/refs | \
                    objects/(info/[^/]+ | \
                             [0-9a-f]{2}/[0-9a-f]{38} | \
                             pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                    git-(upload|receive)-pack))$" \
            /var/www/bin/gitolite-suexec-wrapper.sh/$1

    And the target script starts with:

        USER=$1

    So I am guessing this is used to forward the user name from Apache to the suexec script (which indeed requires it). But I cannot see how this is done. The ScriptAliasMatch documentation makes me think that the /$1 will be replaced by the first matching group of the regexp before it. For me it captures from (?x)^/(.* to ))$, so there is nothing about a user here.

    My underlying problem is that USER is empty in my script, so I get no authorizations in gitolite. I give my username to Apache via basic authentication:

        <Location />
            # Crowd auth
            AuthType Basic
            AuthName "Git repositories"
            ...
            Require valid-user
        </Location>

    defined just under the previous ScriptAliasMatch. So I am really wondering how this is supposed to work and what part of the mechanism I missed so that I don't retrieve the user in my script.
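
    Your reading of the capture group is right: $1 is the whole outer parenthesized group, i.e. the repository-relative git path, and nothing in that URL carries a user. A CGI script also receives no positional arguments from Apache, so $1 inside the wrapper shell script is empty too. As far as I understand gitolite's smart-http setup, the authenticated name travels in the CGI environment instead; a sketch of the idea (not gitolite's exact code):

        #!/bin/sh
        # Apache's Basic auth sets REMOTE_USER in the CGI environment;
        # copy it to where downstream code expects to find the user
        USER=$REMOTE_USER
        export USER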

  • Did chkdsk make it harder to restore files?

    - by neyl
    My friend asked me to try to fix his loaded Sansa Clip+ which wasn't playing. After opening it in MSC mode I discovered that the Music directory was empty and the total of all files was only a few MB; however, the disk properties showed me that it was 7 GB full. I then ran Tools - Error Checking, and Windows dutifully informed me that the disk was corrupt and I should run again allowing Windows to fix errors. I did that, and it told me everything was fixed and that all files were placed in the FOUND.000 directory. FOUND.000 was about 7.5 GB with FILE0000-1546.CHK. (I am aware of methods like ChkBack to scan and convert to mp3 etc., BUT the original filenames and structure are needed!)

    Now I started getting worried that I had made things worse! I have plenty of experience with data recovery programs - Recuva, Restore My Files etc. - and I was anyhow planning to use them to scan the drive. But NOW, after CHKDSK "fixed" the drive, maybe it modified critical FAT information vital for data recovery. So I ran these programs, and 0!!! No trace of files! I tried a ton of recovery programs with the same results, TILL EaseUS Data Recovery Wizard found all files and I purchased the program for $55!

    My questions:

    - In your opinion, did running CHKDSK with automatic fixing of errors make matters worse (i.e. many data recovery programs didn't find a trace, and would have done if not for CHKDSK), or was the filesystem too corrupt anyhow for regular file recovery programs?
    - If I were a professional, would I be responsible for running CHKDSK with automatic fixing?
    - Do you know of a better data recovery program than EaseUS Data Recovery Wizard? According to my experience I haven't found better!

    Thanks

  • Can't mv files between directories on vsftpd

    - by frankyue
    I enabled this in vsftpd.conf:

        chroot_local_user=YES
        chroot_list_enable=YES
        chroot_list_file=/etc/vsftpd.chroot_list
        user_config_dir=/etc/vsftpd_user_conf

    and here is the user set in the vsftpd_user_conf directory:

        ftpupload : local_root=/mnt/upload

    But /mnt/upload is mounted from another directory:

        /mnt/upload on /opt/upload type none (rw,bind)

    Here is the list in /mnt/upload:

        rough_images/  shoes-pentland/  vendor-upload/  shooting/

    Additionally, the shooting/ directory is mounted from another place:

        /mnt/upload/shooting on /mnt/shooting none (rw,bind)

    Now here is the problem. When I use the FTP client to move files between the directories, it fails. Files can be moved between any directories except the shooting one. The permissions are right; I can move any files between these directories successfully by using su ftpupload. Does that mean vsftpd doesn't support the bind mount? Here is the vsftpd.conf:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=000
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=app
        xferlog_std_format=NO
        log_ftp_protocol=YES
        chroot_local_user=YES
        chroot_list_enable=YES
        chroot_list_file=/etc/vsftpd.chroot_list
        user_config_dir=/etc/vsftpd_user_conf
        ls_recurse_enable=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        pasv_enable=YES
        pasv_max_port=***
        pasv_min_port=***
        port_enable=YES
        pasv_address=***
        virtual_use_local_privs=YES
        tcp_wrappers=YES

    and here is the mtab:

        /mnt/upload /opt/upload none rw,bind 0 0
        /mnt/upload/shooting /mnt/shooting none rw,bind 0 0

    All of the permissions under /mnt/upload are the same:

        drwxrwxrwx * ftpupload app
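
    A hedged guess at the mechanism: an FTP "move" (RNFR/RNTO) maps to the rename(2) syscall, and rename fails with EXDEV when source and destination are not within the same mount, while mv in a shell (as with su ftpupload) silently falls back to copy-and-delete. Since shooting/ is involved in its own bind mount, that boundary is the suspect; this shows the mounts the kernel actually sees:

        # rename(2) must stay inside one mount, even on the same device
        grep -E 'upload|shooting' /proc/mounts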

  • Can't display RSSI values in Wireshark

    - by Giovanni Soldi
    I am trying to analyze the uplink wireless traffic generated by my Sony Ericsson phone and captured by my D-Link router, on which I installed the DD-WRT firmware. To do this, first I log in to the router and enable the prism0 interface by typing the command:

        wl -i eth1 monitor 1

    and then I start to capture the packets by typing:

        tcpdump -i prism0 ether src xx:xx:xx:xx:xx:xx -s0 -w /tmp/smbshare/sony_ericsson_test.pcap

    where xx:xx:xx:xx:xx:xx is the MAC address of my Sony Ericsson phone. After a while I transfer the sony_ericsson_test.pcap file to my computer and open it with Wireshark. In order to display the RSSI values I follow this procedure: Edit - Preferences... - Columns - press the "Add" button - as "Field type" choose "IEEE 802.11 RSSI" - choose the name "Power" and click the "Apply" button.

    The problem is that the "Power" column is empty, with no RSSI values. Does anyone have a clue why the RSSI values are not displayed? Maybe I am missing a step. Looking forward to hearing from any of you! Thanks in advance for your help!
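
    One plausible cause, offered as a guess: RSSI only exists in the capture if the pcap was recorded with a radiotap (or Prism) link-layer header; a plain 802.11 link type carries no signal data, so the column stays empty no matter how it is configured in Wireshark. tcpdump can show and select the link type:

        # What link types can prism0 deliver?
        tcpdump -i prism0 -L

        # Capture with the radiotap header if it is offered
        tcpdump -i prism0 -y IEEE802_11_RADIO -s0 -w /tmp/smbshare/test_radiotap.pcap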

  • nginx & php-fpm and custom header

    - by nixer
    I would like to pass a custom header (ACCESS_TOKEN) from a client RESTful application (JS) to the application server (php-fpm). I had read that nginx should pass all HTTP headers to PHP, but somehow it never reaches my PHP :( I can see it in Firebug (http://o7.no/N6DM7q) but can't see it in the $_SERVER variable; it just does not exist in the $_SERVER array.

    I'm thinking that I need to pass it manually. Now my config looks like this:

        location @php-fpm {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_param REQUEST_URI /index.php$request_uri;
            fastcgi_param SCRIPT_FILENAME /htdocs/index.php;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT /htdocs;
        }

    When I add a new line to the location definition:

        location @php-fpm {
            include /etc/nginx/fastcgi_params;
            ...
            fastcgi_param ACCESS_TOKEN $http_access_token;
        }

    it does not help :( Even if I add it to the fastcgi_params file, in PHP it has an empty value :( How can I pass a custom header from the client to the backend (PHP) via nginx?
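
    One well-known gotcha that fits these symptoms: by default nginx silently drops request headers whose names contain underscores, so $http_access_token stays empty no matter how it is forwarded. The directive that changes this is real nginx configuration:

        # http or server context: accept header names containing
        # underscores instead of silently dropping them
        underscores_in_headers on;

    The alternative is to rename the header to ACCESS-TOKEN on the client; nginx then exposes it as $http_access_token automatically, since dashes map to underscores in the variable name.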

  • Samba Does Not List Files In Shared Directory

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share:

        [seanm]
        path = /home/seanm
        writeable = yes
        valid users = seanm, root
        read only = No

    Here's what I see on the server side:

        [seanm@server ~]$ ls -l
        -rw-r--r-- 1 seanm seanm 40 Jan  4 13:45 pangram.txt

    And yet:

        [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP
        Enter seanm's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1]
        smb: \> ls
          .                              D        0  Fri Jan  7 10:08:55 2011
          ..                             D        0  Fri Jan  7 07:58:31 2011

                58994 blocks of size 262144. 50356 blocks available

    This behavior is present on both a Windows client and a Linux client system, and with the firewall on and off, so it's not that. Neither /var/log/messages nor /var/log/secure has any complaints about Samba. I doubt that SELinux is a problem; just in case, here are the relevant settings:

        [root@server ~]# getsebool -a | grep samba
        samba_domain_controller --> off
        samba_enable_home_dirs --> on
        samba_export_all_ro --> off
        samba_export_all_rw --> off
        samba_share_fusefs --> off
        samba_share_nfs --> off
        use_samba_home_dirs --> on
        virt_use_samba --> off

    What am I doing wrong here, and what can I do to fix it?
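
    A few checks that might narrow it down, since the connection itself clearly works (paths assume the share above; nothing here is specific to this box):

        # Does Samba's effective configuration match the stanza you wrote?
        testparm -s

        # SELinux context on the files themselves -- the boolean allows
        # home-dir sharing, but the files still need a readable label
        ls -Z /home/seanm
        restorecon -Rv /home/seanm

        # Re-list with client-side debugging to see what the server returns
        smbclient //server/seanm -U seanm -d 3 -c ls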

  • Linux: Send mail to external mailbox from a server with that user's hostname

    - by dtbarne
    I've got sendmail running on a Linux box. Let's say the hostname of the box is bar.com. If I run the following command, I don't receive the email (which is hosted with a third party), presumably due to the hostname pointing to the local machine:

        echo "Test Body" | mail -s "Test Subject" [email protected]

    Is there any way to get this to work so that I can receive emails at my third-party email address even though it has the same hostname? Do I have to change the hostname of this server (not preferred)?

    It may be worth noting that I created a user "foo" on my machine and noticed that the mailbox for that account is empty. I noticed these log entries, which may or may not be relevant:

        Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: from=apache, size=80, class=0, nrcpts=1, msgid=<[email protected]>, relay=apache@localhost
        Jun 28 01:09:48 bar sendmail[14339]: p5S59mIA014339: from=<[email protected]>, size=293, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.$
        Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: [email protected], ctladdr=apache (48/48), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30080, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5S59mIA$
        Jun 28 01:09:48 bar sendmail[14340]: p5S59mIA014339: to=<[email protected]>, ctladdr=<[email protected]> (48/48), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30495, dsn=2.0.0, stat=Sent
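
    The "mailer=local, stat=Sent" in the last entry says sendmail accepted the message and delivered it to a local mailbox, which is what happens for any domain sendmail considers its own. A sketch of the usual check (standard sendmail files; paths may vary by distribution):

        # Is bar.com claimed as a local domain?
        grep -i 'bar\.com' /etc/mail/local-host-names
        hostname --fqdn

        # If it only appears in local-host-names, remove that line and
        # restart; if it is the machine's own FQDN, mail for bar.com will
        # keep landing locally unless it is relayed via a mailertable
        service sendmail restart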
