Search Results


  • Weird behaviour with OpenVPN: cannot connect to a few websites

    - by Gaby Solis
    My OpenVPN server is Ubuntu 10.04.4 LTS and the OpenVPN version is 2.x. My
    client is on Win 7. It can access most sites, but not YouTube, Facebook,
    Twitter, groups.google.com, etc. My server.conf is:

        local x.x.x.x
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/server.crt
        key /etc/openvpn/keys/server.key
        dh /etc/openvpn/keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status /etc/openvpn/keys/openvpn-status.log
        verb 4

    I can access YouTube etc. using an SSH tunnel + SOCKS proxy, and the
    Ubuntu server itself can access all sites, so nothing is wrong with the
    server. Given how little information I can provide, I am not looking for a
    quick solution. How can I debug this?
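
    A hedged first check, assuming an MTU/fragmentation problem -- a common
    cause when only large-payload sites fail over a UDP tunnel:

        # from the Win 7 client, find the largest non-fragmenting ping that
        # crosses the tunnel (-f = don't fragment, -l = payload size):
        ping -f -l 1400 10.8.0.1

        # if large packets fail, capping the TCP MSS in server.conf often
        # cures exactly this symptom (the value is an assumption; tune it):
        mssfix 1300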

  • How to prevent unison from synchronizing a file while it is still uploading

    - by user134600
    I use CentOS 5.8 Final. I am running unison from cron with the line below:

        */1 * * * * /usr/bin/unison > /dev/null 2>&1

    and the default profile below:

        root = /var/www
        root = ssh://web02.example.com//var/www
        auto=true
        batch=true
        confirmbigdel=true
        fastcheck=true
        group=true
        owner=true
        prefer=newer
        silent=true
        times=true

    So the www folder is synchronized every minute. My problems are:

    When I upload a file bigger than 10 MB to www from a client as user1
    (www is owned by user1) and unison runs while the upload is still in
    progress, the owner of the uploaded file suddenly changes to root:root.

    When I edit a file in the www folder and save it while unison is running,
    the file owner also changes to root:root, where it should be user1:user1.

    Does anyone know about this problem?
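
    A hedged workaround: have clients upload into a staging directory outside
    the sync root, then move the file into place once the transfer finishes.
    A mv within one filesystem is atomic, so unison never sees (or re-owns) a
    half-written file. Paths are assumptions:

        mkdir -p /var/www-staging
        # client uploads to /var/www-staging/bigfile.zip, then:
        mv /var/www-staging/bigfile.zip /var/www/bigfile.zip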

  • Perforce Proxy Server: Caching selective files [closed]

    - by fbrereto
    I just set up a Perforce proxy server for work. I'm noticing the cache
    directory is filling up very quickly, with files I know I will never need.
    For example, there is a 'sandbox' directory in the depot where users keep
    personal branches and other work; a p4 sync is causing the proxy cache to
    grab these users' sandboxes when I'll never need them. I would create a
    symbolic link from the sandbox directory to /dev/null, but then I wouldn't
    be caching my own sandbox, which I am interested in. Is there any way to
    tell the Perforce proxy something to the effect of "if I haven't had to
    sync it, please don't cache it"?

  • Does the advanced format tool bundled by manufacturers actually do anything which mkntfs doesn't?

    - by neurolysis
    I recently bought a new drive (specifically, a 2TB Samsung Spinpoint)
    whose label says that it supports Advanced Format and that I should
    download the tool from Samsung's site. Unless I'm missing something,
    mkntfs has always supported sector sizes up to 4096 bytes:

        -s, --sector-size BYTES
            Specify the size of sectors in bytes. Valid sector size values
            are 256, 512, 1024, 2048 and 4096 bytes per sector. If omitted,
            mkntfs attempts to determine the sector-size automatically and
            if that fails a default of 512 bytes per sector is used.

    Will the tool on Samsung's site do anything other than format the drive
    the same way that

        mkntfs -s 4096 /dev/sdb1

    would? To be specific, I intend to use this drive in a machine that will
    primarily run Windows XP, but I'd rather boot into Linux/BSD and format
    the disk manually than run bloated software. I do want the new AF-style
    sectors, though -- that's essential. So would the command above have
    exactly the same effect as using the Advanced Format tool?
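
    One hedged point of comparison: on 512-emulation AF drives, the vendor
    tools mainly fix partition alignment for XP, which mkntfs alone does not
    touch. A sketch of doing the same by hand (device names are assumptions):

        # what the drive reports (512e AF drives show 512 logical / 4096
        # physical):
        cat /sys/block/sdb/queue/logical_block_size
        cat /sys/block/sdb/queue/physical_block_size

        # start the partition on a 1 MiB boundary (a multiple of 4 KiB),
        # then format it:
        parted -a optimal /dev/sdb mklabel msdos mkpart primary ntfs 1MiB 100%
        mkntfs -Q /dev/sdb1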

  • CentOS - dual boot from new partition

    - by Dima
    I need to install two copies of CentOS 5.5 (bank A and bank B) on
    different partitions of the same hard disk, and install the GRUB boot
    loader on another partition (visible from both banks). The boot loader
    should redirect the boot menu to bank A or bank B (according to the
    configuration). The new partition is mounted at /common_partition and
    GRUB is installed on it using the following command:

        grub-install /dev/hda

    In the new partition I created the following menu.lst file:

        title BOOTCONTROL REDIRECT : PLEASE WAIT
        root (hd0,1)
        configfile /boot/menu.lst
        boot

    On my setup, both partitions (bank A and bank B) are primary and GRUB is
    installed in the MBR. The problem is that the new boot loader (on
    common_partition) does not load. What is wrong in my configuration?
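
    A hedged guess: plain grub-install takes its stage files from the /boot
    of the currently booted bank, so the MBR may still point there rather
    than at the shared partition. GRUB legacy can be told explicitly where
    its files live:

        # install to the MBR, but read stage2/menu.lst from the shared
        # partition instead of the running system's /boot
        grub-install --root-directory=/common_partition /dev/hda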

  • Need help fixing permissions on a mounted drive

    - by Master
    I have tried a lot, but my problem is still not solved. I have a
    partition called Server, and inside it I have 5 folders, like:

        Folder 1
        Folder 2
        Folder 3

    I mount the drive on startup with the following fstab line, as suggested
    by some senior members, and it works, but with some problems:

        /dev/sdb1 /media/Server ntfs defaults,umask=006,fmask=000,dmask=007,uid=1000,gid=1001 0 0

    The problem is that with this line the permissions are applied to all
    folders alike (Folder 1, Folder 2, Folder 3). But I want only Folder 3 to
    be publicly readable and writable, while all the others should be
    private, with no one else having access. How can I achieve that?
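
    Mount-time masks apply to the whole filesystem, so per-folder permissions
    need the driver's real permission support. A hedged sketch, assuming the
    ntfs-3g driver is available:

        # fstab: drop the global masks and let ntfs-3g enforce real POSIX
        # permissions stored on the volume
        /dev/sdb1  /media/Server  ntfs-3g  defaults,permissions  0  0

        # then set each folder individually:
        chmod 700 "/media/Server/Folder 1" "/media/Server/Folder 2"
        chmod 777 "/media/Server/Folder 3"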

  • debian - running unattended-upgrades on a particular day of the week

    - by dastra
    We're running unattended-upgrades on Debian squeeze, and would like it to
    run once a week, only on a Wednesday morning. To attempt this, we have set:

        APT::Periodic::Unattended-Upgrade "7";

    in /etc/apt/apt.conf.d/50unattended-upgrades, and then touched
    /var/lib/apt/periodic/update-stamp to set the timestamp to a Wednesday,
    for instance:

        touch -t 201211280000 /var/lib/apt/periodic/update-stamp

    Running:

        stamp=$(date --date=$(date -r /var/lib/apt/periodic/update-stamp --iso-8601) +%s 2>/dev/null)
        date -u --date="1970-01-01 $stamp sec GMT"

    gives the correct timestamp:

        Wed Nov 28 00:00:00 UTC 2012

    However, unattended-upgrades then seems to ignore this and runs the
    updates on a Saturday morning. Could anyone enlighten me as to how this
    parameter works, and how to set up upgrades to run on a Wednesday?
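
    APT::Periodic only expresses "at most every N days", not "on day X", so
    the weekday ends up wherever the timer happens to land. A hedged
    alternative that pins the day, assuming the stock unattended-upgrade
    binary (the cron file path is an assumption):

        # set APT::Periodic::Unattended-Upgrade "0"; in
        # /etc/apt/apt.conf.d/50unattended-upgrades, then drive it from cron:
        # /etc/cron.d/unattended-upgrade  (3 = Wednesday)
        0 4 * * 3   root   /usr/bin/unattended-upgrade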

  • puppet service resource not stopping the service

    - by Gregg Leventhal
    notice ("This should be echoed") service { "iptables": ensure => "stopped", } This does not stop iptables, I am not sure why. service iptables stop works fine. Puppet 2.6.17 on CentOS 6.3. UPDATE: /etc/puppet/manifests/nodes.pp node 'linux-dev' { include mycompany::install::apache::init include mycompany::config::services::init } /etc/puppet/modules/mycompany/manifests/config/services/init.pp class mycompany::config::services::init { if ($::id == "root") { service { 'iptables': #name => '/sbin/iptables', #enable => false, #hasstatus => true, ensure => stopped } notice ("IPTABLES is now being stopped...") file { '/tmp/puppet_still_works': ensure => 'present', owner => root } else { err("Error: this manifest must be run as the root user!") } }

  • Difference between sending data via UDP in Bash and with a Python script

    - by Kevin Burke
    I'm on a CentOS box, trying to send a UDP packet to port 8125 on
    localhost. When I run this Python script:

        import socket
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto('blah', ("127.0.0.1", 8125))

    the data appears where it should on port 8125. However, when I send the
    data like this:

        echo "blah" | nc -4u -w1 127.0.0.1 8125

    or like this:

        echo "blah" > /dev/udp/127.0.0.1/8125

    the data does not appear in the backend. I know this is horribly vague,
    but it's UDP and it's hard to determine why one packet is being sent and
    the other is not. Do you have any ideas about how to debug this issue
    further?
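
    Two hedged checks: watch what actually hits the wire, and note that echo
    appends a trailing newline that the Python version does not send (some
    backends listening on 8125, e.g. statsd, choke on such payloads):

        # see whether the shell versions send anything at all, and what
        tcpdump -A -i lo -n udp port 8125

        # retry without the trailing newline
        echo -n "blah" | nc -4u -w1 127.0.0.1 8125
        printf "blah" > /dev/udp/127.0.0.1/8125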

  • google sitemap generator installation selinux

    - by adnan
    When I try to install Google Sitemap Generator, I receive this error:

        Change security context of to system_u:object_r:httpd_modules_t
        install: WARNING: ignoring --context (-Z); this kernel is not SELinux-enabled
        Program files successfully copied.
        ./install.sh: line 488: 14284 Segmentation fault
        "$DEST_DIR/$BIN_DIR/$DAEMON_BIN" update_setting $update_setting_flags
        "apache_conf=$APACHE_CONF" "apache_group=$APACHE_GROUP" > /dev/null

    after choosing the file-submission settings. I tried to uninstall it, run
    getenforce, and try again, but got the same problem. When I look in
    /etc/sysconfig, it does not contain the selinux file. My OS is CentOS 6
    x86_64.
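
    A hedged check before fighting the installer: confirm what state SELinux
    is really in (the installer's warning claims the kernel has it disabled):

        sestatus
        # the canonical config lives here; /etc/sysconfig/selinux is normally
        # just a symlink to it
        grep ^SELINUX= /etc/selinux/config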

  • Rolling Back Microsoft CRM during testing

    - by npeterson
    Process-related question: we currently have a multi-tenant installation
    of MS CRM 4.0 on three servers: Dev, Test, and Live. We are actively
    customizing one of the tenants, but the others are static. During user
    testing, we often find it necessary to 'start fresh' in one of the
    tenants. Is it better to try to delete the changes from the tenant
    (created accounts, leads, etc.), or just revert the database to a backup
    from before the testing started? Are there compelling reasons why bulk
    delete is not advisable for MS CRM, or why reverting the database
    frequently could cause issues?

  • Using physical disk with VMware Workstation

    - by chx
    I am using VMware Workstation 9.0 under Windows 7 and trying to boot my
    Linux install from PhysicalDisk0. It starts to boot: GRUB sees the two
    partitions on the disk (I checked in the command line), the kernel and
    the initrd load, and then it stops with "device not found" and drops me
    into an emergency shell. Indeed, there is absolutely nothing in /dev: not
    the /dev/sda device it expects, not /dev/hda, nothing that looks like a
    disk. Edit: I can boot the Linux disk just fine if I boot it natively
    from the BIOS rather than as a VM. Edit 2: The question is, how can I
    make this setup work?
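
    A hedged guess: the initrd was built for the real hardware and lacks the
    driver for VMware's emulated disk controller, so no /dev/sda ever appears
    inside the VM. From the working native boot, one might rebuild it with
    those modules included (assumes a dracut-based distro and the default LSI
    Logic controller):

        dracut --force --add-drivers "mptbase mptscsih mptspi" \
            /boot/initramfs-$(uname -r).img $(uname -r)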

  • How to execute programs on a mounted partition

    - by DevNoob
    This is the application I want to run:

        -rwxr-xr-x 1 manuel manuel 582841 Nov 22 09:51 PromServerMain

    This is the fstab entry:

        /dev/sda8 /media/data0 ext4 defaults,user 0 2

    This is the mount point:

        lrwxrwxrwx 1 manuel manuel    5 Nov 16 14:23 data -> data0
        drwxrwxr-x 9 manuel manuel 4096 Nov 22 09:26 data0

    This is what I get:

        manuel@P5KC /media/data/Projekte/PromServer/src $ ./PromServerMain
        bash: ./PromServerMain: Keine Berechtigung (Permission denied)
        manuel@P5KC /media/data/Projekte/PromServer/src $ sudo ./PromServerMain
        sudo: unable to execute ./PromServerMain: Permission denied

    Even as root. I have no clue what's wrong. Any suggestions? The system is
    Debian Wheezy with Xfce.
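
    A hedged explanation: the user option in fstab implies noexec (along with
    nosuid and nodev), which blocks execution even for root. Restating exec
    after it should help:

        /dev/sda8  /media/data0  ext4  defaults,user,exec  0  2

        # apply without rebooting:
        mount -o remount /media/data0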

  • Is it possible to use SELinux MCS permissions with Samba?

    - by Yuri
    I created user1:

        adduser --shell /sbin/nologin --no-create-home user1
        passwd user1
        smbpasswd -a user1
        smbpasswd -e user1
        semanage login -a -s "unconfined_u" -r "s0-s0:c0" user1

    and added category c0 to the folder ./123 inside the Samba share:

        chcat s0:c0 /share/123/

    After that, user1 can't get into this folder:

        type=AVC msg=audit(1332693158.129:48): avc: denied { read } for
        pid=1122 comm="smbd" name="123" dev=sda1 ino=786438
        scontext=system_u:system_r:smbd_t:s0
        tcontext=unconfined_u:object_r:samba_share_t:s0:c0 tclass=dir

    But if I remove the c0 category:

        restorecon -v /share/123/

    user1 opens the folder with no problem. Am I doing something wrong, or
    does Samba not support SELinux MCS? Installed on CentOS 6.2 are:

        samba3.i686                     3.6.3-44.el6         @sernet-samba
        selinux-policy.noarch           3.7.19-126.el6_2.10  @updates
        selinux-policy-targeted.noarch  3.7.19-126.el6_2.10  @updates
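
    A hedged reading of the AVC: the denial is against smbd itself, which
    runs as smbd_t at plain s0 with no c0 in its range; the per-user MCS
    range set with semanage applies to login sessions, not to the smbd daemon
    acting on the user's behalf. One might confirm with:

        # the daemon's actual context (expect smbd_t with no categories)
        ps -eZ | grep smbd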

  • How to perform diagnostics (stress test) on HP Smartarray Controller

    - by pepoluan
    At my office, we have a server whose RAID controller (HP Smart Array) we
    suspect is failing. A cold boot, however, does not indicate anything. Can
    anyone recommend a method to stress-test the controller? Symptoms that
    make me suspect a failing controller:

    - Disk access is getting slower; the queue is getting longer.
    - Running dmesg on the XenServer console, I see many messages similar to
      this one (the sector number is never the same):

          end_request: I/O error, dev tda, sector 253655584

    - When we move the VM to another physical host, we no longer see the
      above message.
    - Running idle (without any running VM), dmesg no longer emits the above
      message.

    A search on Google indicated that the above message is most commonly
    associated with a failing Smart Array controller. How can I be sure that
    the controller is failing?
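
    A hedged approach using HP's own CLI plus sustained read load, watching
    for fresh I/O errors (hpacucli ships with HP's support pack; device paths
    and slot numbers are assumptions):

        # controller, cache and battery health as HP reports it
        hpacucli ctrl all show status
        hpacucli ctrl slot=0 show config detail

        # crude read-only stress: stream the whole logical drive while
        # watching the kernel log for new end_request errors
        dd if=/dev/sda of=/dev/null bs=1M iflag=direct &
        watch -n 5 'dmesg | tail -n 20'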

  • Authentication in Apache2 with mod_dav_svn

    - by Poita_
    I'm having some trouble setting up authentication in Apache2 for an SVN
    repository that's being served using mod_dav_svn. Here is my Apache
    config for the directory:

        <Location /svn>
          DAV svn
          SVNParentPath /var/svn/repos
          AuthType Basic
          AuthName "Subversion Repository"
          AuthUserFile /etc/apache2/dev.passwd
          Require valid-user
        </Location>

    I can use svn with the projects under /var/svn/repos, so I know that the
    DAV is working, but when I do svn updates or commits (or anything),
    Apache doesn't ask for any authentication... It does the exact same thing
    whether the Auth directives are there or not. The permissions on the
    repository directory (and all subdirectories/files) only give access to
    www-data (the Apache2 user/group). I have also ensured that all relevant
    modules are enabled (in particular mod_auth is enabled, as are all
    mod_dav* modules). Any ideas why svn commands aren't authenticating?
    Thanks in advance.
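
    Two hedged checks from the server itself: whether an anonymous request
    really gets through, and whether another config block overrides this
    Location:

        # an unauthenticated request to the repo should come back 401
        curl -i http://localhost/svn/

        # confirm the auth and dav modules are loaded
        apache2ctl -M | grep -Ei 'auth|dav'

        # look for a competing <Location> or Satisfy directive elsewhere
        grep -rn 'Location\|Satisfy' /etc/apache2/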

  • SpeedTracer NETWORK_RESOURCE_RESPONSE vs NETWORK_RESOURCE_FINISH

    - by Ben Flynn
    I'm using SpeedTracer with Google Chrome to measure the load times of
    requested resources. The SpeedTracer site says:

        NETWORK_RESOURCE_RESPONSE: "Indicates that the renderer has started
        receiving bits from the resource loader"

        NETWORK_RESOURCE_FINISH: "Indicates a resource load is successful
        and complete."

    In my mind that means we should always see a network resource response
    (bytes are arriving) before we see a finish (all bytes received). This
    doesn't seem to be the case at all. Here is a sample:

        Request Timing  @33519ms for 926ms
        Response Timing @34445ms for -847ms
        Total Timing    @33519ms for 78ms

    I'm guessing the response time isn't supposed to be negative. Can someone
    explain this, or is it a bug? I'm using Chrome 10.0.612.3 dev with a
    SpeedTracer I downloaded today.

  • MySQL replication to multiple places

    - by Frederik Nielsen
    Very tricky task to find a good title for this question, but here goes. I
    have a few development machines on which I develop my PHP applications
    and test them via a local webserver. This works out pretty well for each
    machine. However, I would like to replicate the DB from my machines to a
    central location. So, to sum up:

        DEV1 - CENTRAL
        DEV2 - CENTRAL
        DEV3 - CENTRAL
        CENTRAL - DEV1
        CENTRAL - DEV2
        CENTRAL - DEV3

    I hope this makes sense, as I cannot find an easy way to describe it.
    Basically, it is a two-way replication where all four databases contain
    the same data, and each of them can be updated locally and then pushed
    out to the others. Is this actually doable? All my dev machines run
    Windows 7, and my central DB server runs CentOS 6.
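
    It is doable with stock MySQL by arranging the servers in a multi-master
    replication ring (each server is a slave of the previous one), at the
    cost of conflict handling being entirely your problem. A hedged my.cnf
    sketch for one node in a four-node ring:

        [mysqld]
        server-id                = 1   # unique per node: 1..4
        log-bin                  = mysql-bin
        log-slave-updates              # pass changes on around the ring
        # keep AUTO_INCREMENT keys from colliding between masters:
        auto-increment-increment = 4   # number of nodes
        auto-increment-offset    = 1   # unique per node: 1..4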

  • permission denied when trying to execute a binary I burned to a CD-R

    - by user16654
    On an Ubuntu Karmic machine, I burned a CD from the command prompt using:

        cdrecord -v speed=16 dev=0,1,0 /FPS.iso

    The CD now contains an executable and some files. I tested the CD by
    loading it into another machine (Red Hat 5.3), and when I try to run the
    program I get the following message:

        bash: ./FPS1_1: Permission denied

    I can open other files, like text documents (the executable also comes
    with shared libraries). I realized I had burned the CD as root, so I
    burned another one as a different user, but I still got the same problem.
    How can I fix this, or what is the problem? P.S. The image was in / if
    that helps.
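
    A hedged explanation: a plain ISO9660 filesystem has no POSIX permission
    bits, so the executable bit is lost unless the image is built with Rock
    Ridge extensions. A sketch (paths are assumptions):

        # rebuild the image with -R so permissions survive, then burn it
        genisoimage -R -o FPS.iso /path/to/files/
        cdrecord -v speed=16 dev=0,1,0 FPS.iso

        # or work around it on the target machine without reburning:
        cp /media/cdrom/FPS1_1 /tmp/ && chmod +x /tmp/FPS1_1 && /tmp/FPS1_1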

  • Logrotate, is this a proper config for what I want to do?

    - by Felthragar
    I started using logrotate a few days ago on a new server setup (actually
    three of them). My config is as follows:

        /var/www/mywebsite.com/logs/*.log {
            rotate 14
            daily
            dateext
            compress
            delaycompress
            sharedscripts
            postrotate
                /usr/sbin/apache2ctl graceful > /dev/null
            endscript
        }

    The problem is that this is putting several days of logs into the same
    file. For example, I currently have a file called access.log-20121005
    which has logs for Oct 3rd, Oct 4th and Oct 5th in it. Is that proper
    behaviour? What I want is for it to create one log file for each day and
    keep 14 days of logs. Any help appreciated, thanks.
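
    The config itself looks right for daily rotation, so a hedged next step
    is to check whether logrotate actually runs every day, and what it thinks
    it should do (file names below are assumptions):

        # dry run: prints every decision without touching the logs
        logrotate -d /etc/logrotate.d/mywebsite

        # when each log was last rotated, which drives the "daily" decision
        # (path on Debian/Ubuntu; RHEL uses /var/lib/logrotate.status)
        grep mywebsite /var/lib/logrotate/status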

  • How do you set up DNS in Windows Server 2008 in a Hyper-V environment?

    - by Nathan DeWitt
    I have a laptop running Server 2008 and Hyper-V. I have created a virtual
    machine, also running Server 2008, which I promoted to a domain
    controller with dcpromo. I disabled IPv6 because I had no idea how to
    enter a default address, and I just wanted to make a standalone MOSS dev
    environment. I have tried every combination of creating a virtual network
    on the host and then connecting to it from the VM, but I can't get the VM
    to communicate with the host and vice versa. No pinging, no copy and
    paste, nothing. Thanks.

    To update: my VM (which is its own DC) currently does not have a static
    IP. When I set the IP to static, I could not find anything that would let
    it talk to the host machine.

  • Default gateway is in a different subnet. How to configure this in RHEL 6.2?

    - by Dmytro Leonenko
    I have two subnets routed to my server by my ISP, but only one gateway
    IP. The gateway is on the same VLAN as my IP address. For example,
    network 1 is 1.0.0.0/24 and network 2 is 2.0.0.0/24. Both are routed to
    eth0 by my ISP. The gateway is 1.0.0.1, and my host IP is 2.0.0.1/24
    (eth0). So I can configure the default gateway manually with:

        ip route add default dev eth0
        ip route add default via 1.0.0.1

    and then the internet connection works properly. How do I configure this
    in /etc/sysconfig/network-scripts/ifcfg-eth0? I tried to set
    GATEWAY=1.0.0.1, but it doesn't work. I also tried setting GATEWAY and
    GATEWAYDEV in /etc/sysconfig/network, but that only does what the first
    command in the listing above does.
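
    RHEL's network scripts can carry arbitrary ip route arguments in a
    per-interface route file, which should reproduce the manual commands at
    ifup time. A hedged sketch of /etc/sysconfig/network-scripts/route-eth0:

        # make the off-subnet gateway reachable, then use it as default
        1.0.0.1/32 dev eth0
        default via 1.0.0.1 dev eth0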

  • Unmounted root partition

    - by Jack
    My server, running Debian Lenny, has just had a power cut, and it has
    come back up with the root partition in read-only mode. I tried to
    remount the filesystem in read-write mode with:

        mount -n -o remount,rw /

    which gave the output:

        mount: block device /dev/hda1 is write-protected, mounting read-only.

    But now the root filesystem isn't mounted at all, so I can't run anything
    to mount the partition again, or any other command for that matter, such
    as shutdown, because /bin isn't there. Is there anything I can do
    remotely?
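
    A hedged last resort from the still-open shell: bash builtins (echo,
    redirection) work without /bin, so the magic SysRq interface can sync and
    force a reboot into a clean state:

        echo 1 > /proc/sys/kernel/sysrq    # enable SysRq (if not already)
        echo s > /proc/sysrq-trigger       # emergency sync
        echo b > /proc/sysrq-trigger       # immediate reboot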

  • Installing Linux from External Card Reader

    - by Subhamoy Sengupta
    I have this problem. I was experimenting to see if I could use a memory
    card (SDHC) as a USB drive for all intents and purposes, and when I put
    the card in a USB card reader, I can use it just like a regular USB
    stick; it also shows up in the BBS popup menu as a USB stick. But when I
    tried to create an installation medium out of it like this:

        sudo dd if=/path/to/image of=/dev/sdb

    and tried to boot from it, simply nothing happened. The cursor blinked a
    couple of times and jumped to the GRUB of my pre-existing GNU/Linux
    installation. What am I missing here? Is this not doable? I tried this
    with Xubuntu 12.04 and Arch Linux, by the way. I have also tried
    UNetbootin instead of dd.
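
    A hedged sanity check that the image really landed on the card intact (a
    partial or cached write that never hit the device produces exactly this
    "falls through to the next boot entry" behaviour):

        # write to the whole device, forcing it out of the page cache
        sudo dd if=/path/to/image of=/dev/sdb bs=4M conv=fsync

        # compare the first image-sized chunk of the device to the image
        sudo cmp -n "$(stat -c%s /path/to/image)" /path/to/image /dev/sdb \
            && echo "image verified"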

  • haproxy access list using path_dir having issues with firefox

    - by user11243
    I'm trying to route all requests containing a path directory of
    /socket.io/ to a separate port with HAProxy. Here is my config file:

        global
            maxconn 4096   # Total max connections; dependent on ulimit
            nbproc 2

        defaults
            mode http

        frontend all 0.0.0.0:80
            timeout client 86400000
            default_backend web_servers
            acl is_stream path_dir socket.io
            use_backend stream_servers if is_stream

        backend web_servers
            balance roundrobin
            option forwardfor          # sets X-Forwarded-For
            timeout server 30000
            timeout connect 4000
            server web1 127.0.0.1:4000 weight 1 maxconn 1024 check

        backend stream_servers
            balance roundrobin
            option forwardfor          # sets X-Forwarded-For
            timeout queue 5000
            timeout server 86400000
            timeout connect 86400000
            server stream1 127.0.0.1:5100 weight 1 maxconn 1024 check

    URL paths with /socket.io/ get correctly directed to port 5100 in Chrome
    and Safari, but not in Firefox. I'm running HAProxy locally on my Mac for
    dev; not sure if that matters. I'm using HAProxy 1.4.8 and Firefox
    3.6.15. I've tried clearing the cache in Firefox and it didn't help, so
    I'm thinking there's something wrong with the way HAProxy parses the
    Firefox request headers.
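
    A hedged explanation that fits the browser split: HAProxy 1.4 inspects
    only the first request of each connection unless told to manage
    keep-alive itself, and Firefox reuses connections aggressively, so a
    /socket.io/ request riding an existing connection never hits the ACL.
    Adding this to the defaults section should force per-request processing:

        defaults
            mode http
            option http-server-close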
