Search Results

Search found 33182 results on 1328 pages for 'linux port'.


  • Symbolic Links Between User Accounts

    - by Pez Cuckow
    I have been using a cron job to copy a folder from one user's account into another's every day, and someone suggested using symbolic links instead, but I cannot get them to work. In summary: user GAMER generates log files that they want to access via HTTP, but I only have a web server in the user account SERVER. In the past I would copy the logs folder from GAMER's account into SERVER/public_html/ and then chmod the files so the server could access them. Trying symbolic links, I set up a link as root (as only root can access both accounts):

        ln -s /home/GAMER/game/logs/ /home/SERVER/public_html/logs

    However, it seems that only root can use this link. I tried chmod-ing the link, chmod 777 on all the files in /game/logs/* and on /game/logs itself, as well as changing chown and chgrp to SERVER; the files still cannot be read. When viewed from SERVER's account, my shell shows the link and its target highlighted in black with red text. Am I doing something wrong? Please enlighten me!

        /home/GAMER/game/ (chmod & chgrp)
        drwxrwxrwx 3 SERVER SERVER 4096 2011-01-07 15:46 logs

        /home/SERVER/public_html (chmod -h & chgrp -h)
        lrwxrwxrwx 1 server server 41 2011-01-07 19:53 logs -> /home/GAMER/game/logs/
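
    A likely culprit, as an assumption from the listing above rather than a confirmed diagnosis: a symlink is followed with the permissions of the user doing the access, so SERVER (and the web server) needs execute (traverse) permission on every directory along the target path, including /home/GAMER itself, which home directories often deny to others. A minimal check and fix might look like this:

        # follow the path as SERVER; a "Permission denied" reveals the blocking directory
        sudo -u SERVER ls /home/GAMER/game/logs/

        # grant traverse permission on each directory in the path
        chmod o+x /home/GAMER
        chmod o+x /home/GAMER/game

    The black-with-red-text colouring is also ls's usual sign of a link whose target cannot be resolved from that account.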

    Read the article

  • winbind failing after a semi-random amount of time

    - by The Digital Ninja
    I have winbind set up to authenticate against our AD for Samba shares. This is the third such server, and the only one having any issues. It seems that after a random amount of time the Samba shares just stop working. The winbind processes appear to still be running, but restarting them fixes the issue for a while. Reading the logs has been hit and miss, and I don't know exactly when it fails. One interesting thing is that it seems to be pulling from another domain controller that it shouldn't. (I censored the domain name in this example.) Isn't there some way to block authentication against a domain? I'm not sure if this is a symptom or a cause of the issue.

        [2010/10/18 08:02:10, 0] winbindd/winbindd_cache.c:initialize_winbindd_cache(2577)
        initialize_winbindd_cache: clearing cache and re-creating with version number 1
        [2010/10/18 09:15:54, 1] libsmb/clikrb5.c:ads_krb5_mk_req(686)
        ads_krb5_mk_req: krb5_get_credentials failed for [email protected] (Cannot find KDC for requested realm)
        [2010/10/18 09:15:54, 1] libsmb/cliconnect.c:cli_session_setup_kerberos(624)
        cli_session_setup_kerberos: spnego_gen_negTokenTarg failed: Cannot find KDC for requested realm
        [2010/10/18 09:15:54, 0] lib/util_sock.c:write_data(1139)
        write_data: write failure. Error = Connection reset by peer
        [2010/10/18 09:15:54, 0] libsmb/clientgen.c:write_socket(242)
        write_socket: Error writing 108 bytes to socket 18: ERRNO = Connection reset by peer
        [2010/10/18 09:15:54, 0] libsmb/clientgen.c:cli_send_smb(290)
        Error writing 108 bytes to client. -1 (Connection reset by peer)
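
    If the server keeps chasing a KDC in a realm it should not, one common workaround (an assumption, not a confirmed diagnosis from these logs) is to pin the KDCs explicitly in /etc/krb5.conf rather than relying on DNS discovery; the realm and host names below are placeholders:

        [libdefaults]
            default_realm = EXAMPLE.COM
            dns_lookup_kdc = false

        [realms]
            EXAMPLE.COM = {
                kdc = dc1.example.com
                kdc = dc2.example.com
            }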

    Read the article

  • How to fill in the network line in the Ubuntu interfaces config file?

    - by matnagel
    I have to configure the network interface on an Ubuntu Hardy server. The hoster told me this is the network data for the machine:

        IP range:  111.111.200.74 to 111.111.200.78
        Netmask:   255.255.255.248
        Broadcast: 111.111.200.79
        Gateway:   111.111.200.73
        Subnet:    111.111.200.72/29

    I am only using the first IP address. I will update /etc/hosts with 111.111.200.74, but I am still unsure how the /etc/network/interfaces file should look. This is my plan:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 111.111.200.74
            netmask 255.255.255.248
            network 111.111.200.???
            broadcast 111.111.200.79
            gateway 111.111.200.73

    As you can see, I don't know how to build the network line. How would I calculate the value for it, and what is the result? (I changed the first two octets of the subnet; they are not "111.111" in the real setup.)
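
    The network value is the subnet's base address, i.e. the IP ANDed with the netmask. With a /29 mask the last octet works out to 74 AND 248 = 72, so the line should read network 111.111.200.72, which is exactly the value the hoster lists as "Subnet". ipcalc can confirm the arithmetic (output format varies between ipcalc implementations):

        # .74 = 01001010, .248 = 11111000  ->  AND = 01001000 = .72
        ipcalc 111.111.200.74 255.255.255.248    # reports network 111.111.200.72/29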

    Read the article

  • Where is the hostname in /etc/hostname on a Debian server used?

    - by Tchalvak
    I have a webserver running a number of websites. When I ssh in, it gives me a tab name of [email protected], which is counterintuitive, making it seem like those tabs are on my localhost. I would like to change the hostname, but I want to be sure that it won't break anything else. So for what purposes does that hostname string get used, and how can I be sure that it won't affect any functioning systems?
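
    On Debian the string in /etc/hostname is applied at boot and echoed by the hostname command; from there it typically surfaces in the default shell prompt (hence the terminal tab title), in syslog lines, and in the MTA's notion of its own name. A hedged sketch of changing it, where oldname and newname are placeholders:

        # rename the running system
        sudo hostname newname

        # make the change persistent
        echo newname | sudo tee /etc/hostname

        # keep local resolution intact (the 127.0.1.1 line on Debian)
        sudo sed -i 's/oldname/newname/g' /etc/hosts

        # then look for anything else that hard-codes the old name
        grep -r oldname /etc 2>/dev/null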

    Read the article

  • Running Mathematica 5 remotely

    - by oxinabox.ucc.asn.au
    OK, I have Mathematica 5, a powerful CAS. I have a cheap netbook which is not only too slow to run Mathematica, I doubt it even has the hard drive space. I do, however, have remote access to a number of very powerful computers (most of which run various Linuxes, but one of which is Windows Server 2008), mostly over SSH, though other protocols can be arranged for some, I'm sure. (I might even be able to Remote Desktop to the Windows Server 2008 machine.) So I'd like to install Mathematica onto one of these machines and then run it remotely, either from the command line via PuTTY or via some other method. I glanced through the Mathematica documentation and read something about a MathLink program which links a front end installed on my computer to a remote kernel. Does anyone have any experience with this? I'm not sure if this belongs here or on Super User.
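
    Short of the full MathLink remote-kernel setup, a simple sketch, assuming Mathematica 5 installs its text-mode kernel as math on the remote machine, is to run the kernel over SSH:

        # interactive text-mode Mathematica session on the remote host
        ssh user@bighost math

        # or forward X11 and run the notebook front end remotely
        ssh -X user@bighost mathematica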

    Read the article

  • Using the link command to keep backups on another drive

    - by Xavier
    I have a folder, /data/backup, that does not have much space behind it. I have been told that if I link that folder to a folder on a much bigger volume, /bigdata/backup for example, I will still be able to run backups to /data/backup: the path becomes just a link, the data is visible through both paths, and the actual files end up in /bigdata/backup. Since /bigdata has far more disk space, the backups would no longer fail because of space problems in /data/backup. Is this true?
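
    A sketch of the swap, assuming any existing backups in /data/backup should survive the move:

        # move the current contents onto the big volume, then link the old path to it
        mv /data/backup /bigdata/backup
        ln -s /bigdata/backup /data/backup

        # both paths now show the same files, all stored on /bigdata
        ls -l /data/backup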

    Read the article

  • Start multiple instances of Firefox

    - by Vi
    How can I have multiple independent instances of Mozilla Firefox 3.5 on the same X server, but started from different user accounts (and consequently different profiles)? I had limited success only with Xephyr :1 and DISPLAY=:1 /usr/local/bin/firefox, but Xephyr has no equivalent of Cygwin/X's "rootless" mode, so it's not comfortable (see my other question). The idea is to have one Firefox instance for various "Serious Business" things and another for regular browsing with dozens of add-ons, securely isolated from each other.
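
    One hedged sketch without Xephyr, assuming the X server uses xhost-style access control and the display is :0; the second account name and the profile name are placeholders:

        # as the user who owns the X session, admit the second local user
        xhost +SI:localuser:otheruser

        # start an independent instance under the other account
        sudo -u otheruser env DISPLAY=:0 firefox -no-remote -P business

    -no-remote keeps the new instance from handing its URL to the already running Firefox, and -P selects the separate profile.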

    Read the article

  • SSH Advanced Logging

    - by Radek Šimko
    I've installed openSUSE on my server and want to set up SSH to log every command that is sent to the system over it. I found this in my sshd_config:

        # Logging
        # obsoletes QuietMode and FascistLogging
        #SyslogFacility AUTH
        #LogLevel INFO

    I guess both of those directives have to be uncommented, but I'd like to log every command, not only authorization (login/logout via SSH). If someone breaks into my system, I want to know what they did.
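
    Worth noting: those directives only control sshd's own logging; even LogLevel VERBOSE records logins, logouts and file transfers, not the commands run in a shell. Command-level logging needs something beyond sshd, e.g. auditd, or a wrapper like this sketch (the paths and the wrapper are assumptions, and it only catches the command line the client asked sshd to run, not keystrokes inside an interactive session):

        # /etc/ssh/sshd_config
        LogLevel VERBOSE
        ForceCommand /usr/local/bin/log-session

        # /usr/local/bin/log-session
        #!/bin/sh
        # record what the client requested, then run it (or a login shell)
        logger -t ssh-session "user=$USER cmd=${SSH_ORIGINAL_COMMAND:-interactive}"
        if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
            exec "$SHELL" -c "$SSH_ORIGINAL_COMMAND"
        else
            exec "$SHELL" -l
        fi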

    Read the article

  • "make menuconfig" throwing cannot find -lc error in my Fedora 11 PC

    - by Sen
    When I try to run make menuconfig on a Fedora 11 machine, it throws the following error:

        [root@PC04 kernel]# make menuconfig
          HOSTCC -static scripts/basic/fixdep
        scripts/basic/fixdep.c: In function 'traps':
        scripts/basic/fixdep.c:377: warning: dereferencing type-punned pointer will break strict-aliasing rules
        scripts/basic/fixdep.c:379: warning: dereferencing type-punned pointer will break strict-aliasing rules
        /usr/bin/ld: cannot find -lc
        collect2: ld returned 1 exit status
        make[1]: *** [scripts/basic/fixdep] Error 1
        make: *** [scripts_basic] Error 2

    Please help me with this issue. How can I solve it? Thanks, Sen
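
    The "cannot find -lc" line usually means the static C library is absent: fixdep is linked with -static, and Fedora ships static libc in its own package. A likely fix (glibc-static for the linker error, ncurses-devel for menuconfig's UI):

        yum install glibc-static ncurses-devel
        make menuconfig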

    Read the article

  • Hybrid gmail MX + postfix for local accounts

    - by krunk
    Here's the setup: we have a domain, mydomain.com. Everything is on our own server except the general email accounts, which are hosted by Gmail; Gmail is currently set as the MX record. The server also has various email aliases it needs to support for bug trackers and such, e.g. in /etc/aliases:

        bugs: |/path/to/issuetracker.script

    I'm struggling with a setup that allows the following, both locally and from users' email clients:

        guser1 - has a Gmail account and a local account
        guser2 - only has a Gmail account
        bugs   - has a pipe alias in /etc/aliases for the issue tracker

    Scenarios:

        mail to guser1@mydomain.com from the local host (crons and such) needs to go to the Gmail account
        mail to guser2@mydomain.com from the local host, likewise
        mail to bugs@mydomain.com needs to be piped to the local issue tracker script

    The first stab was creating a transport map. In this scenario our server would be set as the MX and guser*-destined mail is passed on to Gmail. Put the Gmail users in a map like so:

        guser1@mydomain.com smtp:gmailsmtp:25
        guser2@mydomain.com smtp:gmailsmtp:25

    Problems: it ignores extensions such as guser1+tag@mydomain.com, and it only works if append_at_myorigin = no (if set to yes, Gmail refuses to connect with: E4C7E3E09BA3: to=, relay=none, delay=0.05, delays=0.02/0.01/0.02/0, dsn=4.4.1, status=deferred (connect to gmail-smtp-in.l.google.com[209.85.222.57]:25: Connection refused)). And since append_at_myorigin is set to no, all received emails show "(unknown sender)".

    The second stab was to set explicit localhost aliases in /etc/aliases and do a domain-wide forward on mydomain.com. This too requires setting the local server as the MX:

        root: root@localhost

        # transport
        mydomain.com smtp:gmailsmtp:25

    Problem: if I create a transport map for a domain that matches $myhostname, the aliases file is never parsed. So when a local user (or daemon) sends an email like mail -s "testing" root < text.txt, Postfix ignores the /etc/aliases entry, maps the recipient to root@mydomain.com and attempts to send it through the Gmail transport mapping.

    Third stab: create a subdomain for the bugs, something like bugs.mydomain.com. Set the MX for this subdomain to the local server and leave the MX for mydomain.com pointing at Gmail.

    Problem: this does not solve the issue with the local accounts. So when the bug tracker responds to an email from guser1@mydomain.com, it uses a local transport and the user never receives the email.
    % postconf -n

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_at_myorigin = no
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = $myhostname, localhost.$myhostname, localhost
        myhostname = mydomain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_cert_file = /etc/ssl/certs/kspace.pem
        smtp_tls_enforce_peername = no
        smtp_tls_key_file = /etc/ssl/certs/kspace.pem
        smtp_tls_note_starttls_offer = yes
        smtp_tls_scert_verifydepth = 5
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_recipient_restrictions = permit_mynetworks, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_destination
        smtpd_tls_ask_ccert = yes
        smtpd_tls_req_ccert = no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
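
    One hedged, untested sketch of the split, reusing the names above: take mydomain.com out of mydestination so it can be relayed, send the whole domain to Gmail by default, and intercept only the pipe aliases with a virtual map that points them at the local host:

        # /etc/postfix/main.cf (fragment)
        mydestination = localhost, localhost.localdomain
        relay_domains = mydomain.com
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/transport: everything in the domain defaults to Gmail
        mydomain.com        smtp:gmailsmtp:25

        # /etc/postfix/virtual: carve out the script aliases before the relay
        bugs@mydomain.com   bugs@localhost

    With that, /etc/aliases still pipes bugs to the issue tracker, while guser1, guser2 and any unlisted address relay out to Gmail with the @mydomain.com origin intact.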

    Read the article

  • Nginx: forward SSL for a single site

    - by Will.brown
    I have an nginx server set up and it works fine for HTTP, but I would like HTTPS connections to bypass the proxy. When someone goes to https://ip1 (the nginx server), nginx should not intercept the connection at all; all traffic should be forwarded to https://ip2 (the web server), and the replies forwarded back:

        client -> https://ip1 -> https://ip2 -> https://ip1 -> client

    I don't need nginx to do this for every SSL website, just one particular site. I'm guessing I do this with NAT masquerading, but I'm not exactly sure how, or whether I will also need to tell nginx to ignore SSL. Can someone help me, please? This has got me stuck.
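
    Plain nginx proxying cannot hand off a raw TLS session, so the NAT guess is the usual route: DNAT the HTTPS port on ip1 to ip2 and masquerade so replies return through ip1. A sketch, with IP1 and IP2 as placeholders and packet forwarding assumed to be allowed by the firewall:

        # forward incoming HTTPS on ip1 straight to the web server
        iptables -t nat -A PREROUTING -d IP1 -p tcp --dport 443 \
            -j DNAT --to-destination IP2:443

        # rewrite the source so ip2 replies back through this box
        iptables -t nat -A POSTROUTING -d IP2 -p tcp --dport 443 -j MASQUERADE

        # the kernel must route between the two
        sysctl -w net.ipv4.ip_forward=1

    Because PREROUTING runs before local delivery, nginx never sees these connections, so nothing needs to change in its configuration.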

    Read the article

  • Run a script after killing lxsession (xorg)

    - by user284194
    I am trying to run a program automatically from within a bash script after killing the LXDE session. My script consists of:

        #!/bin/sh
        pkill lxsession
        sh /home/pi/RetroPie/EmulationStation/emulationstation

    My aim is to log out of the LXDE session and run EmulationStation on my Raspberry Pi with a bash script. I'm using pkill lxsession to bypass lxsession's logout confirmation dialog. As it stands, this script just gets me to the command line from a working LXDE desktop. Thanks for reading.
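
    A slightly more defensive sketch waits for the session to actually die before launching the emulator:

        #!/bin/sh
        pkill lxsession
        # wait until lxsession (and the desktop it owns) is really gone
        while pgrep lxsession >/dev/null; do
            sleep 1
        done
        /home/pi/RetroPie/EmulationStation/emulationstation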

    Read the article

  • How to prioritize openvpn traffic?

    - by aditsu
    I have an OpenVPN server with one network interface. VPN traffic is extremely slow. I tried to do traffic control with this configuration (currently):

        qdisc del dev eth0 root
        qdisc add dev eth0 root handle 1: htb default 12
        class add dev eth0 parent 1: classid 1:1 htb rate 900mbit
        # vpn
        class add dev eth0 parent 1:1 classid 1:10 htb rate 1500kbit ceil 3000kbit prio 1
        # local net
        class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 900mbit prio 2
        # other
        class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 1000kbit prio 2
        filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 1194 0xffff flowid 1:10
        filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst 192.168.10.0/24 flowid 1:11
        qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
        qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10
        qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10

    But it's still extremely slow. I have an IMAPS connection that keeps transferring data continuously (I successfully limited its rate), but with OpenVPN I can't seem to get more than about 100kbit/s. The internet connection speed is about 3mbit/s (symmetric). What could be the problem? Does the sport filter work for UDP?
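
    On the last question: the u32 "match ip sport" compares bytes at the same offset for TCP and UDP, so it generally matches both, but an explicit UDP match (OpenVPN defaults to UDP 1194) removes the ambiguity. A hedged variant of the VPN filter:

        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
            match ip protocol 17 0xff \
            match ip sport 1194 0xffff flowid 1:10

    Also worth remembering that qdiscs on eth0 only shape egress; if the slowness is on the download side, that traffic has to be policed on ingress or shaped on an IFB device instead.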

    Read the article

  • HP Server Automation - agent misreporting hostname

    - by warren
    I've been using HP Server Automation for some time, but have noticed an interesting issue I'm hoping the SF community has seen and knows a workaround for. When the management agent on Solaris or RHEL (the only platforms I've noticed it on) reports the hostname of the managed server, it does not return the value of hostname; it returns the first alias for that entry in /etc/hosts. Any ideas on how to get around that, other than editing /etc/hosts so the alias is at the end of the line instead of the front?
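
    If the agent really takes the first name after the address, the standard /etc/hosts convention works in its favour: canonical FQDN first, aliases after. Placeholder example:

        # /etc/hosts: canonical name first, aliases at the end
        192.0.2.10   myserver.example.com   myserver   legacy-alias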

    Read the article

  • Upgrade to Ubuntu 13.04 from 12.04 with an ISO image

    - by Digvijay Yadav
    I have Ubuntu 12.04 installed on my system and want to upgrade it to Ubuntu 13.04 using an ISO image of 13.04. I tried this solution, but it didn't work for me: after running these commands I didn't get any alert about upgrading. I also don't understand the gksu part of the solution. Here are the steps I tried:

        sudo mount -t iso9660 -o loop PATH/TO/ISO /cdrom
        sudo /cdrom/cdromupgrade

    (Read more: http://linuxpoison.blogspot.tw/2011/06/how-to-upgrade-ubuntu-using-alternate.html)

    I also wanted to know if I can do this over the network, by which I mean the ISO file is on some other computer. Thank you.
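
    On the networked-computer part: the upgrade only needs the ISO visible as a local file, so sharing the remote folder first (sshfs is just one option) and then loop-mounting should behave the same as the local case. Paths below are placeholders; note also that Ubuntu officially supports upgrading one release at a time, so 12.04 would normally go through 12.10 before 13.04:

        # make the remote folder holding the ISO visible locally
        sshfs user@otherpc:/path/to/isos /mnt/remote

        # then proceed as with a local image
        sudo mount -t iso9660 -o loop /mnt/remote/ubuntu-13.04.iso /cdrom
        sudo /cdrom/cdromupgrade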

    Read the article

  • Installing and running google-chrome on an old Ubuntu 7.10 legacy system

    - by 12632
    I am trying to get google-chrome to work on Ubuntu 7.10. I installed it with --force-depends, but now when I try to run it, I get this error:

        /usr/bin/google-chrome: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory

    Is there a way to get google-chrome to load even without this dependency satisfied? This is an old system that needs to keep this old 7.10 Ubuntu version, and I would like to have google-chrome installed if possible, even if it means no sound or other incompatible features.
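
    libnss3.so is Mozilla's NSS crypto library, which Chrome cannot start without, so the missing dependency has to be supplied rather than skipped. A hedged workaround is to unpack a newer NSS build into a private directory and point the loader at it; the package file and paths are placeholders, and the 7.10-era glibc may still prove too old:

        # extract a libnss3 .deb obtained elsewhere into a private lib dir
        mkdir -p ~/chrome-libs
        dpkg-deb -x libnss3_something.deb ~/chrome-libs

        # launch chrome with that directory on the library search path
        LD_LIBRARY_PATH=~/chrome-libs/usr/lib:$LD_LIBRARY_PATH google-chrome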

    Read the article

  • Can I mark a folder as mountpoint-only?

    - by Collin
    I have a folder ~/nas which I usually use sshfs to mount a network drive on. Today, I didn't realize the share hadn't been mounted yet, and copied some data into it. It took me a bit to realize that I'd just copied data into my own local drive rather than the network share. Is there some way to mark in the system that this folder is supposed to be a mount point, and to not let anyone copy data into it? I tried the permissions solution here: How to only allow a program to write to a directory if it is mounted?, but if I don't have write access I also can't mount anything to it.
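
    One common trick, assuming the local filesystem is ext3/ext4: make the empty mountpoint immutable. Writes into the unmounted directory then fail, while mounting over it still works, because the mount covers the directory rather than modifying it:

        # with nothing mounted
        sudo chattr +i ~/nas       # cp/touch into ~/nas now fails

        # mounting on top is unaffected; the mounted fs has its own attributes
        sshfs server:/share ~/nas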

    Read the article

  • Recommendations on managing dot files for users using Puppet

    - by Beaming Mel-Bin
    Goal is to have a collection of dot files (.bashrc, .vimrc, etc.) in a central location. Once it's there, Puppet should push out the files to all managed servers. I initially was thinking of giving users FTP access where they could upload their dot files and then having an rsync cron job. However, it might not be the most elegant or robust solution. Wanted to see if anyone else had some recommendations.
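
    The Puppet-native sketch would be a small module holding the files, with file resources pushing them out, which also lets the dot files live in version control instead of arriving by FTP. The module name, user and paths are assumptions:

        # modules/dotfiles/manifests/init.pp
        class dotfiles {
          file { '/home/mel/.bashrc':
            ensure => file,
            owner  => 'mel',
            mode   => '0644',
            source => 'puppet:///modules/dotfiles/bashrc',
          }
          file { '/home/mel/.vimrc':
            ensure => file,
            owner  => 'mel',
            mode   => '0644',
            source => 'puppet:///modules/dotfiles/vimrc',
          }
        }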

    Read the article

  • How hard is it for a software developer to maintain a server?

    - by Samy
    I'm a software developer and don't have much experience as a sysadmin. I developed a web app and am considering buying a server and hosting it myself. Is this a huge undertaking for a web developer? How difficult is it to maintain a server and keep up with the latest security patches and all that kind of fun stuff? I'm a single user, not planning to sell the service to others. Can someone also recommend an OS for my case, and maybe some good learning resources that are concise and not too overwhelming?

    Read the article

  • Constantly diminishing free space on Fedora 17

    - by Varun Madiath
    I don't know how to explain this other than to say that my computer seems to magically run out of free space when it runs for a while. The output of df -h . on my home directory is below:

        /dev/mapper/vg_vmadiath--dev-lv_home   50G   47G   0   100%   /home

    When I run sudo du -cks * | sort -rn | head -11 on /home (a command I got from "decreasing free space on fedora 12"), I get the following output:

        32744344 total
        32744328 vmadiath
        16       lost+found

    If I restart my system, things seem to fix themselves and I'm left with about 20 or 25 GB of free space. I'm running XFCE with XMonad as my window manager under Fedora 17. Programs I'm running include the XFCE terminal, grep, find, Firefox, Eclipse, LibreOffice Writer, zsh and Emacs. Any help will be greatly appreciated. I'll gladly give you any other output you might need.
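
    Space that du cannot account for but that reappears on reboot is the classic signature of deleted-but-still-open files: the blocks stay allocated until the last process closes them, and du never sees them because they have no directory entry. A quick check:

        # list open files whose last directory link is gone
        sudo lsof +L1 | grep -i deleted

    Restarting whichever process holds the large deleted files (browsers and loggers are frequent offenders) releases the space without a reboot.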

    Read the article

  • Is it possible to either abort or interrupt and later continue a lvconvert -m1 operation?

    - by SLi
    I have run the command lvconvert -m1 rootvg/newroot /dev/sdb to convert a linear logical volume to a mirrored one. The operation has not yet finished; I interrupted the command with Ctrl-C at around the 10% mark, but the operation seems to be running in the background anyway. Is it possible to either:

    1) abort the lvconvert operation and revert to the state before it (this would be my preferred option), or
    2) safely interrupt the operation and resume it later?
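
    Aborting is the better-trodden path of the two: converting back to a single copy drops the partly-synced mirror leg and returns the LV to linear. A sketch, worth checking against man lvconvert for your LVM version first:

        # abort: remove the mirror image being built on /dev/sdb
        lvconvert -m0 rootvg/newroot /dev/sdb

        # inspect mirror state and sync progress before and after
        lvs -a -o name,copy_percent,devices rootvg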

    Read the article

  • Amarok 2.1.1 Does not go to the next song

    - by nigative
    Hi, I just updated Amarok from the KDE3 version and it looks a bit weird and different. My problem is that after I updated my music collection and started to play it (it is all loaded into my playlist), Amarok doesn't start the next song when the current one finishes. I have the repeat (repeat playlist) and random (random tracks) options enabled. Thanks.

    Read the article

  • Why is scp not overwriting my destination file?

    - by Noli
    I'm trying to back up a file via the command:

        scp /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz

    When I run it, the scp progress bar shows up and it looks like it's transferring the file. However, when I log into the destination server to check, the timestamp and file size haven't changed from the older version, so it looks like scp didn't overwrite the old file at all. It only seems to work when I manually delete the file from the destination server first. I'm running Ubuntu, and this is happening with two destination servers: one Cygwin ssh, and one Fedora Core 3. Does anyone have any idea why this is happening? I thought scp would simply overwrite existing files. Thanks
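
    As a workaround while the cause is tracked down, rsync with checksumming makes the overwrite explicit and does not trust timestamps at all:

        # -c compares checksums, so a changed file is always rewritten
        rsync -avc /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz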

    Read the article
