Search Results

Search found 41561 results on 1663 pages for 'linux command'.

Page 500/1663

  • Enrich a dataset of POIs with OpenStreetMap

    - by zero
    update: thanks to hints from users, e.g. Oliver Salzburg and slhck, I have become aware of gis.stackexchange.com, so I moved the topic there myself. Please, can you or somebody who has the permission close this question, since we do not need the topic on two sites? Thanks for your work, and keep up the service here! Stack sites rock. I have a list of POIs, some with a full description and some with only a few data entries, like the following:

        6.9441000 50.9242000 [50677] (Ital) Casa di Biase [Köln]
        6.9373600 50.9291800 [50674] (Ital) Al Setaccio [Köln]

    However, I need the full dataset. Can I get this somewhere? If I have all the position data, is it possible to find the rest, namely:

    a. the name of the street
    b. the name of the town

    So, for example, the data should finally look like this:

        10.5346100 52.1613600 [38300] (Chin) Wanbao Kommissstr.9 [Wolfenbüttel]
        13.2832500 52.4422600 [14167] (Ital) LaPergola Unter den Eichen 84d [Berlin]
        13.3177700 52.5062900 [10625] (Chin) Good Friends Kantstr.30 [Berlin]

    Can I do this with OpenStreetMap? Should I parse OpenStreetMap data? Or use OpenBabel?
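
    For the reverse-geocoding part (coordinates to street and town), OpenStreetMap's Nominatim service is one ready-made option; a sketch against the first POI above, using the public endpoint (subject to its usage policy):

        # Reverse-geocode a coordinate pair to a street address.
        # Note: Nominatim wants lat first; the POI list above is lon lat.
        curl 'https://nominatim.openstreetmap.org/reverse?format=json&lat=50.9242&lon=6.9441&zoom=18&addressdetails=1'

    The JSON response carries the street, postcode, and town, which covers both (a) and (b); for bulk work, a local Nominatim import of the OSM data avoids hammering the public server.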

  • For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?

    - by user1322092
    I just purchased an SSL certificate to secure/enable only ONE domain on a server with multiple vhosts. I plan on configuring as shown below (non-SNI). In addition, I still want to access phpMyAdmin, securely, via my server's IP address. Will the configuration below work? I have only one shot to get this working in production. Are there any redundant settings?

        --- apache ssl.conf file ---
        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

        --- apache httpd.conf file ---
        ...
        DocumentRoot "/var/www/html"   # currently exists
        ...
        NameVirtualHost *:443   # new - is this really needed if "Listen 443" is in ssl.conf???
        ...
        # below vhost currently exists (the domain I wish to enable SSL on)
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain1.com
            ServerAlias 173.XXX.XXX.XXX
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

        # below vhost currently exists.
        <VirtualHost *:80>
            ServerName domain2.com
            ServerAlias www.domain2.com
            DocumentRoot /home/web/public_html/domain2.com/public
        </VirtualHost>

        # new - I plan on adding this vhost block to enable SSL for domain1.com!
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.domain1.com
            ServerAlias 173.203.127.20
            SSLEngine on
            SSLProtocol all
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

    As previously mentioned, I want to be able to access phpMyAdmin via "https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin", which is stored under "/var/www/html/hiddenfolder".
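
    A layout like this can be sanity-checked before the one production restart with Apache's own tools; the IP below is the poster's, everything else is standard apachectl/openssl usage:

        # Syntax-check every loaded config file
        apachectl configtest

        # Show how Apache resolved the vhosts; this catches a missing or
        # duplicate NameVirtualHost for *:443 vs *:80
        apachectl -S

        # After a graceful restart, confirm the chain actually served on 443
        apachectl graceful
        echo | openssl s_client -connect 173.203.127.20:443 | head -20

    If `apachectl -S` lists exactly one *:443 vhost and the default *:80 docroot still covers /var/www/html, the phpMyAdmin-by-IP requirement should hold as well.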

  • Configure all hosts, then create a list of the config for all hosts?

    - by AME
    I deployed a huge number of hosts with Ansible, which worked very nicely. Each host got its individual settings and configuration. Now I'd like to generate a config file for another system that uses these hosts. For it, I need, for every host, a part of the generated configuration (the part that configures the database). Here is an example of the situation with two hosts having different configuration, and the other system that uses a part of the Ansible-generated configuration:

        host1  ansible configured  dbA
        host2  ansible configured  dbQ

    The other system:

        host1 = dbA
        host2 = dbQ

    The values are computed differently (dbQ instead of dbB for host2, for example) if the host belongs to a different cluster, and so on, making it impractical to just read the host configuration out of the host_vars. I believe I would need to iterate over the hosts and let Ansible figure out the computed values for the variables like it would when deploying, but I do not know how to put that result into one template. Please advise :)
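
    One pattern that may fit (a sketch, not a tested recipe: it assumes each host ends up with its database name in a variable, here called db_name, once the normal vars/roles are evaluated) is to render the combined file once, on the control machine, from hostvars:

        # combined.conf.j2 - iterates over every host in the play
        {% for host in play_hosts %}
        {{ host }} = {{ hostvars[host]['db_name'] }}
        {% endfor %}

        # task appended to the playbook that already configures the hosts
        - name: generate config for the other system
          template:
            src: combined.conf.j2
            dest: /tmp/combined.conf
          delegate_to: localhost
          run_once: true

    Because the task runs inside the same play, Ansible computes the per-host values exactly as it would when deploying, and run_once keeps it to a single rendering on localhost.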

  • GDM login screen is not displayed with VNC

    - by niboshi
    Hi, I set up a VNC server with xinetd, and configured GDM so that XDMCP is enabled. The VNC connection seems okay, but the GDM login screen is not shown. Instead I can only see the old bare X screen (gray meshed background and X-shaped mouse pointer), which I can't interact with at all. What can I do to fix the problem? No log is written below /var/log/. Server distribution: Ubuntu Maverick. /etc/xinetd.d/vnc looks like this:

        service vnc1024
        {
            disable      = no
            socket_type  = stream
            protocol     = tcp
            wait         = no
            user         = nobody
            server       = /usr/bin/Xvnc
            server_args  = -inetd -query localhost -geometry 1024x768 -depth 24 -once securitytypes=none
            port         = 12345
        }

    /etc/gdm/custom.conf:

        [daemon]
        [security]
        DisallowTCP=false
        [xdmcp]
        Enable=true
        [gui]
        [greeter]
        [chooser]
        [debug]
        [servers]

    /etc/services is also configured. Thanks
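
    The gray stipple with an X cursor usually means Xvnc came up but never got an answer to its XDMCP query, so checking whether GDM is actually listening on UDP port 177 is a reasonable first step; a sketch, with the display number made up for the manual test:

        # Is GDM listening for XDMCP queries? (177/udp)
        sudo netstat -ulnp | grep 177

        # Restart GDM so it rereads custom.conf, then retry
        sudo restart gdm

        # Bypass xinetd and run Xvnc by hand to watch its stderr
        sudo -u nobody /usr/bin/Xvnc :99 -query localhost -once \
            -geometry 1024x768 -depth 24 securitytypes=none

    If nothing is bound to 177/udp after the restart, the problem is on the GDM/XDMCP side rather than in the xinetd service.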

  • How do I reference the value of a constructed environment variable in a loop?

    - by Rob Spieldenner
    What I'm trying to do is loop over environment variables. I have a number of installs that change, and each install has 3 IPs to push files to and run scripts on. I want to automate this as much as possible (so that I only have to modify a file that I'll source with the environment variables). The following is a simplified version that, once I figure it out, will let me solve my problem. So, given in my.props:

        COUNT=2
        A_0=foo
        B_0=bar
        A_1=fizz
        B_1=buzz

    I want to fill in the for loop in the following script:

        #!/bin/bash
        . <path>/my.props

        for ((i=0; i < COUNT; i++))
        do
            <script here>
        done

    so that I can get the values from the environment variables, like the following (but in a form that actually works):

        echo $A_$i $B_$i

    or

        A=A_$i
        B=B_$i
        echo $A $B

    returning foo bar, then fizz buzz.
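
    For reference, bash handles this with indirect expansion, ${!name}; a minimal sketch against the my.props values above, assuming the file sits in the current directory:

        #!/bin/bash
        . ./my.props

        for ((i = 0; i < COUNT; i++)); do
            a="A_$i"               # build the *name* of the variable
            b="B_$i"
            echo "${!a} ${!b}"     # ${!name} expands to the value of A_0, B_0, ...
        done
        # prints: foo bar
        #         fizz buzz

    The original `A=A_$i; echo $A` attempt only prints the constructed name; the extra `!` is what dereferences it.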

  • Maximizing TCP connections on HAProxy load balancer

    - by imaginative
    I am currently using HAProxy to load balance TCP connections from clients to my Erlang app server. The connection is persistent, which means I'm limited to roughly 64K clients on an optimized server (I'm currently running HAProxy on an m1.large EC2 instance). My app server is designed to scale horizontally based on the number of TCP connections. What's worrying me, though, is that I'll need an equal number of HAProxy servers and app servers, since it's a 1:1 connection. Is there currently a way to "proxy" the TCP connection to the app server so that once HAProxy sends the client off to my Erlang server, it can free up the connection, ready to serve another client? Are there any papers or existing solutions out there I can read, so that I only have to worry about the 64K limit on my app servers, and not on the load-balancing servers themselves?
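
    For what it's worth, the ~64K ceiling is on ephemeral ports per (source IP, destination IP, destination port) tuple, not on connections as such, so a single HAProxy box can go further by spreading its outbound connections over several source addresses; a hedged sketch, with made-up IPs and the XMPP-ish port as a placeholder:

        # haproxy.cfg fragment: each extra `source` address buys another
        # ~64K ephemeral ports toward the same Erlang backend
        listen erlang_pool
            bind :5222
            mode tcp
            balance leastconn
            server app1a 10.0.0.10:5222 source 10.0.1.1
            server app1b 10.0.0.10:5222 source 10.0.1.2

    The client-facing side is not bound the same way, since inbound connections from many client IPs all land on one listening port.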

  • terminal tools and logs for debugging TCP issues

    - by kellogs
    I have a server which I am testing for functionality (not load, not stress) with tsung: 50 users per second, 100 users in total. Judging from the tsung graphs (tsung is the testing framework), the TCP connections (red line) drop to 0 while the commenced user sessions (green line) do not. The server logs give me nothing to grip onto, so I am speculating it is some kind of TCP issue. Should this be the case? Where would I look further on the server, and are there any logs or tools I should be looking at? Only SSH is available, no GUI.

        root@XMPP:~# cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=11.10
        DISTRIB_CODENAME=oneiric
        DISTRIB_DESCRIPTION="Ubuntu 11.10"

    Thank you
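
    Terminal-only TCP triage usually starts with the kernel's own counters plus a packet capture; a sketch, where the port filter is just a placeholder for whatever the server actually listens on:

        # Socket summary: watch for growing SYN-RECV / closed / orphan counts
        ss -s

        # Kernel-side drops: listen-queue overflows show up in these counters
        netstat -s | grep -iE 'listen|overflow|drop'

        # Any kernel messages about SYN flooding or conntrack table overflow
        dmesg | tail -50

        # Capture the handshakes themselves to see which side closes first
        tcpdump -ni eth0 -w /tmp/tsung.pcap port 5222

    Reading the pcap afterwards (tcpdump -r, or Wireshark on another machine) shows whether the connections are refused, reset by the server, or silently timing out.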

  • Samba joined to AD cannot see users in the security tab on the client

    - by Jonathan
    I've got Samba joined via Kerberos and winbindd to our AD network, and user authentication and everything else is working great. However, when I try to add users/groups to file permissions, it tells me they are not found. All the users and groups show up fine with getent, so I'm not sure why they are not showing up here. Below is my smb.conf; I would much appreciate any help with this.

        # GLOBAL PARAMETERS
        [global]
            socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=11264 SO_SNDBUF=11264
            workgroup = [hidden]
            realm = [hidden]
            preferred master = no
            server string = xerxes web/file server
            security = ADS
            encrypt passwords = yes
            log level = 3
            log file = /var/log/samba/%m
            max log size = 50
            printcap name = cups
            printing = cups
            winbind enum users = Yes
            winbind enum groups = Yes
            winbind use default domain = Yes
            winbind nested groups = Yes
            winbind separator = +
            winbind refresh tickets = yes
            idmap uid = 1600-20000
            idmap gid = 1600-20000
            template primary group = "Domain Users"
            template shell = /bin/bash
            kerberos method = system keytab
            nt acl support = yes

        [homes]
            comment = Home Directories
            valid users = %S
            read only = No
            browseable = No
            create mask = 0770
            directory mask = 0770
            force create mode = 0660
            force directory mode = 2770
            inherit owner = no

        [test]
            comment = Test
            path = /mnt/test
            writeable = yes
            valid users = %s
            create mask = 0770
            directory mask = 0770
            force create mode = 0660
            force directory mode = 2770
            inherit owner = no

        [printers]
            comment = All Printers
            path = /var/spool/cups
            browseable = no
            printable = yes
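
    When getent works but the Windows security tab cannot resolve names, the usual suspects are the machine join and the winbind pipe itself; a few hedged checks (swap in a real AD account for the last one):

        # Is the machine account still valid in AD?
        net ads testjoin

        # Can winbind itself enumerate and map users?
        wbinfo -u
        wbinfo -t                       # trust-secret check
        wbinfo --name-to-sid someuser   # name-to-SID resolution

        # Confirm the config parses the way you think it does
        testparm -s

    If wbinfo resolves names but the client still cannot, raising the winbind log level temporarily and watching /var/log/samba/ while clicking "Check Names" tends to show which lookup the client is actually issuing.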

  • Basic clarification about Limited FTP/sFTP users

    - by mattewre
    I would like to get some clarification about the correct way to create limited users with access to my VPS, which I use as a web server with Nginx. I usually do NOT install FTP and allow access via SFTP only; is that an okay setup in general? This is what I usually do to create a limited user called "admin" that should be able to access, via SFTP, the folder with the website data:

        mkdir -p /var/www/mysite.com/
        adduser admin
        adduser admin www-data
        chown -R root:root /var/www
        chmod -R 755 /var/www
        chmod -R 755 /var/www/mysite.com
        chown -R admin:www-data /var/www/mysite.com/

    It seems not to be the correct way; I always have problems with permissions when I upload some files (for example, with WordPress in general). I would like to create a user that works exactly like the one the providers give to their clients when they buy a hosting service (that is FTP; I would prefer SFTP access). It is for personal use, but I think that a limited user is a lot safer to use than "root" via SFTP.
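
    A common way to get hosting-style limited SFTP accounts with stock OpenSSH is the internal-sftp subsystem plus a chroot; a sketch, with illustrative paths, and note that the chroot directory itself must be root-owned and not group-writable or sshd will refuse the login:

        # /etc/ssh/sshd_config (fragment)
        Subsystem sftp internal-sftp

        Match User admin
            ChrootDirectory /var/www/mysite.com
            ForceCommand internal-sftp
            AllowTcpForwarding no

    After `service ssh restart`, the admin user lands jailed inside the site directory with no shell at all, and the writable content can live in a subdirectory owned admin:www-data, which sidesteps most of the upload-permission fights.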

  • How can I restrict SSH access when the source IP is dynamic

    - by Supratik
    Hi, I want to protect SSH access to our live web server from all IPs except our office's static IP. However, some employees connect to this live server from their dynamic IPs, so it is not always possible for me to change the iptables rule on the live server whenever an employee's dynamic IP changes. I tried to put them on the office VPN and allowed SSH access only from the office IP, but the office connection is slow compared to our employees' private internet connections, and moreover it adds extra overhead to our office network. Is there any way I can solve this problem?
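
    Two stopgaps that avoid tracking employee IPs at all: turn off password authentication entirely, and rate-limit whatever remains; a hedged sketch, with a documentation address standing in for the office IP:

        # /etc/ssh/sshd_config: keys only, nothing to brute-force
        PasswordAuthentication no
        PermitRootLogin no

        # iptables: office IP always allowed, everyone else rate-limited
        iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
            -m recent --set --name SSH
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
            -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP

    With key-only auth the dynamic-IP employees keep working from anywhere, while the recent-module rule caps brute-force attempts from unknown addresses.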

  • How to run KDM or GDM over ssh

    - by Xolve
    I have a computer on the LAN running ssh. I can normally tunnel a GUI application over ssh using:

        ssh computer-name -X program-name

    But I want my full desktop to be running on the remote computer over ssh, so that I can use that computer remotely like a local desktop. For this I think I will need to run KDM (or GDM) remotely; what configuration do I need to make this happen?
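
    One route (a sketch, and not the only option) is to skip ssh for the display traffic and point a nested X server at the remote display manager via XDMCP, which must be enabled on the remote KDM/GDM first:

        # On the local machine: a nested X server that sends an XDMCP
        # query to the remote host's display manager
        Xephyr :1 -query remote-host -screen 1280x1024

        # remote-host needs XDMCP switched on, e.g. Enable=true in the
        # [xdmcp] section of /etc/gdm/custom.conf, and UDP 177 reachable

    XDMCP is unencrypted, so on anything but a trusted LAN a VNC session tunneled through ssh is the safer equivalent.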

  • Can a 32-bit RHEL4 userland work with a 64-bit kernel?

    - by James
    Is there a way to change an i386 RHEL4 machine to run an amd64 kernel, but ensure that it still builds software into same i386 binaries? On Debian this seems quite straightforward: just install an amd64 kernel (worst case, build one like this guy: http://www.debian-administration.org/users/jonesy/weblog/1) and prefix everything with "linux32". Then everything that considers uname -m will be unchanged, I just need to handle the few cases that consider uname -r. What is the Red Hat equivalent? Is the only way a full 64-bit installation on another disk and then chrooting back to the 32-bit system before anyone builds anything? (Even the best examples of that seem to be Debian-based.) Background: We make a large system that runs on (a variant of) i386 RHEL4. However, some of the larger RHEL build machines now have enough RAM that they might benefit from going 64-bit (for the kernel and maybe some of the bigger build steps). Our build system doesn't support cross-compilation.
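
    For reference, the Red Hat counterpart of prefixing everything with "linux32" is setarch, from util-linux (linux32, where present, is just an alias for it); a small sketch of the wrapping approach:

        # Start a shell whose personality reports a 32-bit machine,
        # even under an x86_64 kernel
        setarch i386 bash
        uname -m    # now reports i386/i686 rather than x86_64

        # Or wrap a single build invocation
        setarch i386 make

        # uname -r (the kernel release string) is unchanged, so the few
        # cases keying off that still need separate handling

    That matches the Debian approach one-for-one: everything consulting uname -m is fooled, and only the uname -r consumers need patching.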

  • Ubuntu - Automatically mount external drives to /media/LABEL on boot without a user logged in?

    - by endolith
    This question is similar, but kind of the opposite of what I want. I want external USB drives to be mounted automatically at boot, without anyone logged in, to locations like /media/<label>. I don't want to have to enter all the data into fstab, partially because it's tedious and annoying, but mostly because I can't predict what I'll be plugging into it or how the partitions will change in the future. I want the drives to be accessible to things like MPD, and available when I log in with SSH. gnome-mount seems to only mount things when you are locally logged into a Gnome graphical session.
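
    Without a desktop session, one blunt but serviceable sketch is a boot-time script that walks the by-label symlinks udev creates and mounts whatever it finds; it assumes the drives are attached at boot, and labels containing spaces would need extra quoting care:

        #!/bin/sh
        # mount-by-label.sh - run from /etc/rc.local or an init script
        for dev in /dev/disk/by-label/*; do
            [ -e "$dev" ] || continue          # glob matched nothing
            label=$(basename "$dev")
            dir="/media/$label"
            mkdir -p "$dir"
            # skip partitions that are already mounted somewhere
            mountpoint -q "$dir" || mount "$dev" "$dir"
        done

    Because it mounts as root outside any login session, MPD and SSH logins see the drives immediately, with no fstab entries to maintain.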

  • Extremely high mysqld CPU usage with no active queries

    - by RadarNyan
    I have a VPS running Ubuntu 12.04 LTS with a LEMP stack. I followed the guide from the Linode Library (since I'm using a Linode) to set it up, and everything worked fine until now. I don't know what's wrong, but my CPU usage has been climbing for a week. Today things got really bad: I saw 74% CPU usage, so I went to check and found mysqld taking far too much CPU (somewhere around 30% ~ 80%). So I did some Google searching and tried disabling InnoDB, restarting mysql, resetting ntp / the system clock (isn't this bug supposed to have happened more than a year ago?!) and rebooting my VPS. Nothing helped. Even with the mysql process list empty, mysqld CPU usage stays very high. I don't know what I missed and have totally no idea; any advice would be appreciated. Thanks in advance.

    Update: I got these from running "strace mysqld":

        write(2, "InnoDB: Unable to lock ./ibdata1"..., 44) = 44
        write(2, "InnoDB: Check that you do not al"..., 115) = 115
        select(0, NULL, NULL, NULL, {1, 0}) = 0 (Timeout)
        fcntl64(3, F_SETLK64, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}, 0xbfa496f8) = -1 EAGAIN (Resource temporarily unavailable)

    hum... I did try to disable InnoDB and it didn't fix this problem. Any idea?

    Update 2:

        # ps -e | grep mysqld
        13099 ?  00:00:20 mysqld

    Then, using "strace -p 13099", the following lines appear repeatedly:

        fcntl64(12, F_GETFL) = 0x2 (flags O_RDWR)
        fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        accept(12, {sa_family=AF_FILE, NULL}, [2]) = 14
        fcntl64(12, F_SETFL, O_RDWR) = 0
        getsockname(14, {sa_family=AF_FILE, path="/var/run/mysqld/mysqld.sock"}, [30]) = 0
        fcntl64(14, F_SETFL, O_RDONLY) = 0
        fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR)
        setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0", 8) = 0
        setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0", 8) = 0
        fcntl64(14, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
        futex(0xb786a584, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xb786a580, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
        futex(0xb7869998, FUTEX_WAKE_PRIVATE, 1) = 1
        poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1) = 1 ([{fd=12, revents=POLLIN}])

    er... now I totally don't get it x_x help
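
    "InnoDB: Unable to lock ./ibdata1" is the classic symptom of a second mysqld fighting over the same data directory (the first strace above was itself a freshly spawned instance spinning on that lock), so checking for duplicates is a cheap first step; a sketch, with the Ubuntu default datadir:

        # Is more than one mysqld (or mysqld_safe) alive?
        ps aux | grep [m]ysqld

        # Which process actually holds the InnoDB system tablespace open?
        lsof /var/lib/mysql/ibdata1

        # Stop every instance cleanly, then start exactly one
        service mysql stop
        pkill -f mysqld      # only if the stop above leaves strays behind
        service mysql start

    A stray instance retrying the lock once per second in a tight loop would account for CPU burn with an empty processlist.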

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one lvm physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help.

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory.  Clear? no
        Root inode has dtime set (probably due to old mke2fs).  Fix? no
        Inode 2 is in use, but has dtime set.  Fix? no
        Inode 2 has a extra size (4730) which is invalid  Fix? no
        Inode 2 has compression flag set on filesystem without compression support.  Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        HTREE directory inode 2 has an invalid root node.  Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
        Inode 3 has compression flag set on filesystem without compression support.  Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        ....

    Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, neither does the kernel log. Running a write-mode badblocks across the swap lv doesn't find problems either. So the disk may be failing, but not in an obvious way. At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
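
    On the "save an image before I rebuild" point, imaging the opened LUKS mapping captures exactly what fsck saw; a sketch, with the partition, mapper, and LV names as placeholders for this particular layout:

        # Open the encrypted container read-only (names are illustrative)
        cryptsetup luksOpen --readonly /dev/sda2 laptop_crypt
        vgchange -ay

        # GNU ddrescue keeps reading past bad sectors and records what it
        # skipped in the map file, unlike plain dd
        ddrescue /dev/mapper/vg-lithe_root /mnt/usb/lithe_root.img \
            /mnt/usb/lithe_root.map

    If ddrescue logs no read errors across the whole LV, that is another point against a straightforward media failure and toward garbage having been written while the machine was last up.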

  • Fluxbox startup file not working

    - by Jack
    I am placing apps into my fluxbox startup file as per the instructions; however, nothing starts up except fluxbox. It doesn't matter which app I try, so it isn't an app problem. Here is my startup file:

        #!/bin/sh
        #
        # fluxbox startup-script:
        #
        # Lines starting with a '#' are ignored.

        # Change your keymap:
        xmodmap "/home/josh/.Xmodmap"

        # Applications you want to run with fluxbox.
        # MAKE SURE THAT APPS THAT KEEP RUNNING HAVE AN ''&'' AT THE END.
        tint2 &
        tilda &

        # And last but not least we start fluxbox.
        # Because it is the last app you have to run it with ''exec'' before it.
        exec fluxbox
        # or if you want to keep a log:
        # exec fluxbox -log "/home/josh/.fluxbox/log"

    I have also tried tests such as "touch ~/testwoked" and such; nothing works. It makes no difference whether the file is executable or not.
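
    Whether ~/.fluxbox/startup runs at all depends on how the session is launched: startfluxbox executes the startup file, while a display manager that launches the bare fluxbox binary skips it entirely. A quick sketch of how to check:

        # Does the session entry invoke startfluxbox (runs the startup
        # file) or bare fluxbox (does not)?
        grep -ri fluxbox /usr/share/xsessions/ 2>/dev/null

        # If starting X by hand, ~/.xinitrc should end with:
        exec startfluxbox

    Since touch tests fail too, the script is almost certainly never being executed, which fits that failure mode exactly.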

  • Is it possible to use SELinux MCS permissions with Samba?

    - by Yuri
    I created a user1:

        adduser --shell /sbin/nologin --no-create-home user1
        passwd user1
        smbpasswd -a user1
        smbpasswd -e user1
        semanage login -a -s "unconfined_u" -r "s0-s0:c0" user1

    and added a category c0 to the folder ./123 inside the Samba share:

        chcat s0:c0 /share/123/

    After that, user1 can't go into this folder:

        type=AVC msg=audit(1332693158.129:48): avc: denied { read } for pid=1122 comm="smbd" name="123" dev=sda1 ino=786438 scontext=system_u:system_r:smbd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0:c0 tclass=dir

    But if I remove the c0 category:

        restorecon -v /share/123/

    user1 opens the folder with no problem. Am I doing something wrong, or does Samba not support SELinux MCS? Installed on CentOS 6.2 are:

        samba3.i686                      3.6.3-44.el6          @sernet-samba
        selinux-policy.noarch            3.7.19-126.el6_2.10   @updates
        selinux-policy-targeted.noarch   3.7.19-126.el6_2.10   @updates
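
    The AVC itself hints at the answer: the denial is against smbd's own context, system_u:system_r:smbd_t:s0, which carries no categories at all, so the MCS login mapping for user1 never enters the picture. A couple of hedged checks make that visible:

        # smbd workers run in the smbd_t domain at plain s0, regardless
        # of which Samba user is connected
        ps -eZ | grep smbd

        # The semanage login mapping only applies to processes started
        # *as* user1 (e.g. an SSH login), not to smbd serving them
        semanage login -l | grep user1

        # The category actually sitting on the directory
        ls -dZ /share/123/

    So the behavior observed is expected: giving user1 an MCS range constrains user1's own processes, while the file access here is performed by smbd on the user's behalf.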

  • Simple NNTP client for Ubuntu

    - by Alexander Gladysh
    I need to download a particular NNTP group. I do not need to set up any crons, put group contents in /opt, or other such stuff. I just want to run

        <fetch-nntp> <server> <group-name> <output-dir>

    and be done with it, without putting a lot of garbage in the system. If that <fetch-nntp> would not fetch the same messages on the second run, fine; but I can live without that. All the NNTP clients that I've looked at are trying to be an NNTP server as well. Is there something simpler that is suitable for my needs?
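
    From memory, slrnpull (shipped alongside the slrn newsreader) comes closest to that one-shot shape, though its exact flags are worth double-checking against its man page; a sketch assuming a server without authentication:

        # The spool's slrnpull.conf lists: group  max-articles  expiry-days
        echo "comp.lang.c 200 14" >> /var/spool/slrnpull/slrnpull.conf

        # Fetch the articles (one plain file each) into the spool
        NNTPSERVER=news.example.com slrnpull

    It keeps track of what it has already fetched, so the no-duplicates-on-second-run wish comes for free; the spool directory is the only state it leaves behind.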

  • Sftp via shell - how is it possible?

    - by Tomasz Zielinski
    (Moved from StackOverflow: http://stackoverflow.com/questions/4589725/sftp-via-shell-how-it-is-possible) How is it possible for tools like http://mysecureshell.sourceforge.net/ to provide SFTP access merely by being set as the user's shell, i.e. by typing:

        usermod -s /bin/MySecureShell myuser

    ? I'm on Debian Lenny, with the default sshd/OpenSSH. Is this, for example, a feature of the SSH protocol that allows the user's shell to handle sftp commands? I can't wrap my head around this, because usually OpenSSH needs the sftp-server module (or the internal one in newer versions), and that makes me think sftp commands don't even hit the shell and are handled earlier, or by a different code path.
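
    The trick is that sshd does not execute the subsystem binary directly: it hands the command string to the user's login shell, roughly as sketched below, so a "shell" that recognizes the sftp-server invocation can simply speak SFTP on stdin/stdout itself instead of exec'ing it (paths here mirror Debian defaults; the exact Subsystem line is whatever sshd_config says):

        # What sshd effectively runs for an SFTP login:
        #   <login-shell> -c <subsystem-command>
        /bin/MySecureShell -c /usr/lib/openssh/sftp-server

        # The subsystem command sshd will pass through the shell:
        grep -i '^Subsystem' /etc/ssh/sshd_config

    So sftp traffic does reach the shell after all, just as a `-c` command line, which is exactly the hook MySecureShell uses.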
