Search Results

Search found 89712 results on 3589 pages for 'ubuntu file osb'.

Page 123/3589 | < Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >

  • Dell OpenManage On Ubuntu Server 12.04 Cannot Log In

    - by Austin
    I have a Dell PowerEdge 2950 with 2x130GB and 2x2TB drives. I need to set them up in RAID 1 arrays so that the 130GB drives are mirrored and host the OS, while the 2TB drives are mirrored and are the content drives. So I go from 4 disks down to two, one 130GB and one 2TB. I can do that in the BIOS RAID utility, no problem. But I need to be able to manage the RAID arrays and be able to expand them WITHOUT shutting down the server. Now, to my understanding, OpenManage will allow me to do that AND it runs on Ubuntu. So I go and set it up and try to log into the web interface, and it will not let me log in. I have followed Dell's guide to set up OpenManage, even added the usernames to the files and permissions and such; however, I cannot get it to let me log in or anything. I have reinstalled OpenManage several times, even reinstalled the OS three times, and nothing works. Google does not help either. It simply says login failed after hitting submit. Please help.

    Read the article

  • Configuring three monitors with two Radeon X1600/X1650 graphics cards under Ubuntu

    - by cpm
    I have three SyncMaster 932a monitors I want to use with two Radeon X1600/X1650 cards under Linux. I am running X.org X Server 1.6.0, as provided by Ubuntu's Wubi installer. After turning off mirroring, I ended up with this xorg.conf: Section "Monitor" Identifier "Configured Monitor" EndSection Section "Screen" Identifier "Default Screen" Monitor "Configured Monitor" Device "Configured Video Device" SubSection "Display" Virtual 2560 1024 EndSubSection EndSection Section "Device" Identifier "Configured Video Device" EndSection The left monitor had a menu bar and a task bar, the center monitor was just desktop, and windows would maximize to the current monitor. The third monitor and second graphics card weren't being used at all. Then I changed my configuration to manually specify each card with their PCI bus: Section "ServerLayout" Identifier "TheLayout" Screen 0 "Radeon Screen 1" Screen 1 "Radeon Screen 2" RightOf "Radeon Screen 1" EndSection Section "Screen" Identifier "Radeon Screen 1" Monitor "Configured Monitor" Device "Radeon the First" SubSection "Display" Virtual 2560 1024 EndSubSection EndSection Section "Screen" Identifier "Radeon Screen 2" Monitor "Configured Monitor" Device "Radeon the Second" EndSection Section "Device" Identifier "Radeon the First" Driver "radeon" BusID "PCI:1:0:0" EndSection Section "Device" Identifier "Radeon the Second" Driver "radeon" BusID "PCI:2:0:0" EndSection Section "Monitor" Identifier "Configured Monitor" EndSection Now both the left and right monitors have task bars and menu bars. Windows cannot be dragged from the first two monitors to the third monitor. Also, maximizing in the left or center window fills both monitors. I also tried adding Option "Xinerama" "true" to the ServerLayout section. X11 wasn't able to start up. I want to: Allow moving windows along all three monitors. Maximizing only fills the current monitor. Either have menu/task bars on only the left monitor or all three monitors How can I make this possible?
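    A hedged sketch of the usual way to get one desktop spanning separate screens with the old radeon driver: declare all three screens in the ServerLayout and enable Xinerama there, and drop the large Virtual lines (a big Virtual resolution plus Xinerama is a known bad combination on some drivers and may be why X refused to start). The third Screen reference and the dual-output handling are assumptions, not taken from the question:

        Section "ServerLayout"
            Identifier "TheLayout"
            Screen 0 "Radeon Screen 1"
            Screen 1 "Radeon Screen 2" RightOf "Radeon Screen 1"
            Screen 2 "Radeon Screen 3" RightOf "Radeon Screen 2"
            Option "Xinerama" "true"
        EndSection

        # The card that drives two monitors needs two Device sections with the
        # same BusID, told apart by a "Screen 0" / "Screen 1" entry, and each
        # xorg.conf Screen section then points at one of them. Also remove the
        # "Virtual 2560 1024" lines before enabling Xinerama.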

    Read the article

  • How to make new copied files always have 777 permission

    - by Master
    I have one Linux ext3 partition shared on the network. Now when someone copies files from a Mac, other people can't change the files due to permission problems. Is there any way that any new file which is copied will always have 777 permission, and some specific user as the owner of the file rather than the default user? Thanks
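    One hedged sketch, assuming the partition is exported with Samba (the question doesn't say how it is shared; the share name, path and user below are placeholders): smb.conf can force both the mode and the owner of everything written through the share.

        [shared]
            path = /srv/shared
            writable = yes
            # every file/directory created through the share gets mode 0777
            create mask = 0777
            directory mask = 0777
            # and is owned by one specific account instead of the connecting user
            force user = shareowner
            force group = shareowner

    If the copies arrive some other way (NFS, scp, local), default POSIX ACLs on the directory (setfacl -d -m o::rwx on the share root) are the rough equivalent.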

    Read the article

  • Mongrel Cluster on Ubuntu Server Karmic

    - by trobrock
    I am trying to get Mongrel Cluster working on my Ubuntu Server Karmic box in preparation to set up Capistrano. I've been trying to get the two to work all day and finally decided to completely remove Capistrano and see if I can just get Mongrel Cluster to work. I ran this to install mongrel cluster: gem install mongrel mongrel_cluster Everything installed fine, but when I change into my app's directory... # mongrel_rails -bash: mongrel_rails: command not found I can run it from its install location: # /var/lib/gems/1.8/bin/mongrel_rails Usage: mongrel_rails <command> [options] Available commands are: ... It lets me build the cluster configuration file fine, but when I run the cluster::start command: # /var/lib/gems/1.8/bin/mongrel_rails cluster::start starting port 8000 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8000 -P tmp/pids/mongrel.8000.pid -l log/mongrel.8000.log starting port 8001 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8001 -P tmp/pids/mongrel.8001.pid -l log/mongrel.8001.log starting port 8002 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8002 -P tmp/pids/mongrel.8002.pid -l log/mongrel.8002.log It seems it isn't calling it from the right directory after that command; what can I do to fix this? I tried setting the path previously when trying to set up Capistrano, but the path didn't stay set when Capistrano used ssh to run the commands.
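    A minimal sketch of the usual fix, assuming the gem bin directory really is /var/lib/gems/1.8/bin as shown above: make mongrel_rails reachable by a bare PATH lookup, since cluster::start just shells out to the plain command name (and Capistrano's non-interactive SSH sessions don't read your interactive profile).

        # quick fix: drop a symlink somewhere that is already on every PATH
        sudo ln -s /var/lib/gems/1.8/bin/mongrel_rails /usr/local/bin/mongrel_rails

        # or edit /etc/environment so the gems bin dir is on the system-wide PATH,
        # which non-interactive SSH sessions also pick up, e.g.:
        # PATH="/var/lib/gems/1.8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"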

    Read the article

  • Changing terminal colors in Ubuntu Server

    - by Josh K
    I'd like to change the colors. The lime green highlighting on pale blue colored file names is killing my eyes. I'm not sure if I'm using xterm or GNOME or whatever, but I would like to change the default color scheme (preferably to something less offensive to my corneas) and have it stay changed (update my user profile). Colors are nice, but sometimes they make the text unreadable. I would settle for having no colors, standard B&W, if I can't have nice colors.
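    On a stock Ubuntu server those colours come from ls via dircolors rather than from the terminal emulator itself, so a hedged per-user sketch is to override the palette, or switch colours off entirely; ~/.dircolors is just the conventional file name:

        # dump the defaults, then edit entries such as DIR and OTHER_WRITABLE
        # (the green-background-on-blue look usually comes from OTHER_WRITABLE)
        dircolors -p > ~/.dircolors
        echo 'eval "$(dircolors -b ~/.dircolors)"' >> ~/.bashrc

        # or give up on colours altogether
        echo "alias ls='ls --color=never'" >> ~/.bashrc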

    Read the article

  • Enable access without WWW on Ubuntu

    - by TiuTalk
    Hi there... I want to enable access to my site without the "www." prefix... I tried to insert this in my file in /etc/apache2/sites-available: <VirtualHost *:80> serverName mydomain.gov.br serverAlias www.mydomain.gov.br ServerAdmin [email protected] DocumentRoot /var/www/mydomain/ ... (lots of other configs) </VirtualHost> But this isn't working... :(
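    The vhost above already has the bare name as its ServerName, so Apache itself should accept it; the usual remaining suspects are the site never being enabled, another vhost catching the request first, or no DNS A record for the bare domain. A hedged checklist (the site file name is a placeholder):

        # which vhost answers for which name, and is this one loaded at all?
        apache2ctl -S

        # does the bare name even resolve? (the most common cause)
        dig +short mydomain.gov.br

        # if the file in sites-available was never enabled:
        sudo a2ensite mydomain
        sudo /etc/init.d/apache2 reload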

    Read the article

  • Ubuntu 10.04 RGBA(Transparent theme) bug

    - by Nrew
    I tried to follow this tutorial from ghacks.net, but I end up with a bug. Every time I try to change the desktop background or the theme, it opens up lots of folders and then closes them again. So I cannot do anything when I try to change the background or the theme back to the default. Here is the tutorial. And I can't even shut down my machine now, please help. EDIT I tried executing these commands in the terminal: sudo add-apt-repository ppa:erik-b-andersen/rgba-gtk sudo apt-get update && sudo apt-get upgrade sudo apt-get install gnome-color-chooser gtk2-module-rgba sudo apt-get install murrine-themes And it installed GNOME Color Chooser. It works, and the windows became transparent, but I can't change to the default theme and I can't set a background. Here's the screenshot when I try to change something in the preferences: It's just okay if I can't change the theme, but it won't let me change the background either. Or maybe the background is changed but it's transparent just like the other windows. You can see that in the taskbar (or whatever it's called in Ubuntu). It's full. What can I do to at least make the background visible?
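    A hedged way back to a sane default is to undo the tutorial: ppa-purge downgrades everything that came from that PPA to the stock Ubuntu packages, and gconftool-2 can reset the theme from a terminal while the Appearance dialog is unusable ("Ambiance" is just the 10.04 default theme name):

        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:erik-b-andersen/rgba-gtk
        sudo apt-get remove gnome-color-chooser gtk2-module-rgba

        # reset the GTK theme from the command line
        gconftool-2 --set /desktop/gnome/interface/gtk_theme --type string "Ambiance"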

    Read the article

  • Debian/Ubuntu - No network connection

    - by leviathanus
    I have a very weird situation on my Ubuntu 12.04 LTS Server. I can not access (ping) my gateway, although I believe my config is ok - I attach the outputs. Any hints where to look? (I changed the beginning of the IP to something different, just obfuscation) ping 5.9.10.129 PING 5.9.10.129 (5.9.10.129) 56(84) bytes of data. From 5.9.10.129 (5.9.10.129) icmp_seq=2 Destination Host Unreachable From 5.9.10.129 (5.9.10.129) icmp_seq=3 Destination Host Unreachable From 5.9.10.129 (5.9.10.129) icmp_seq=4 Destination Host Unreachable uname -r 3.2.0-29-generic ifconfig eth0 eth0 Link encap:Ethernet HWaddr 3c:97:0e:0e:54:d7 inet addr:5.9.10.142 Bcast:5.9.10.159 Mask:255.255.255.224 inet6 addr: fe80::8e70:5aff:feda:c4ac/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1216 errors:0 dropped:0 overruns:0 frame:0 TX packets:490 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:107470 (107.4 KB) TX bytes:34344 (34.3 KB) Interrupt:17 Memory:d2500000-d2520000 ip route default via 5.9.10.129 dev eth0 metric 100 5.9.10.128/27 via 5.9.10.129 dev eth0 5.9.10.128/27 dev eth0 proto kernel scope link src 5.9.10.142 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 5.9.10.129 0.0.0.0 UG 1000 0 0 eth0 5.9.10.128 5.9.10.129 255.255.255.224 UG 0 0 0 eth0 5.9.10.128 0.0.0.0 255.255.255.224 U 0 0 0 eth0 iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination UPD: Eric, this is how routing information looks on a working server: 0.0.0.0 78.47.198.49 0.0.0.0 UG 100 0 0 eth0 78.47.198.48 78.47.198.49 255.255.255.240 UG 0 0 0 eth0 78.47.198.48 0.0.0.0 255.255.255.240 U 0 0 0 eth0 As I understand it, Hetzner tries to ensure security by this, so I can not take over an IP by changing my MAC. But this is another server, which has another netmask (255.255.255.240) UPD2: BatchyX, on the working server: 78.47.198.49 dev eth0 src 78.47.198.60 cache on the broken: 5.9.10.129 dev eth0 src 5.9.10.142 cache

    Read the article

  • How can I fix a broken AVI file?

    - by KeyboardMonkey
    I have a broken AVI; it won't play in VLC, Xine or MPlayer. I tried HandBrake (it reads the file and then resets the source to None), while OggConvert, Avidemux and mencoder fail to read the file, so I cannot seem to re-encode it. I suspect the header info is corrupt; is there a way to get the a/v streams out with a missing header?
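    Two hedged things worth trying before declaring the header unrecoverable: ffmpeg will often remux whatever streams it can still parse into a fresh container, and mencoder can force-rebuild a broken AVI index; the file names are placeholders.

        # copy the streams into a new container without re-encoding
        ffmpeg -i broken.avi -acodec copy -vcodec copy remuxed.avi

        # or rebuild the AVI index in place of the damaged one
        mencoder -forceidx -oac copy -ovc copy broken.avi -o fixed.avi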

    Read the article

  • Ubuntu Equivalent of Unix Command cp -n

    - by Ted Karmel
    Some software I need to install on my Ubuntu Hardy system has a Makefile which includes the command cp -n. However, I get an error stating that -n is an invalid option. The command works in a Mac terminal, but I need it to work on Ubuntu. Does anyone know the equivalent command for Ubuntu? Thanks.
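    Hardy's coreutils predates cp -n (no-clobber), so the Makefile line has to be replaced with an equivalent; two hedged drop-ins, with src/dest as placeholders:

        # copy only if the destination does not already exist
        [ -e dest ] || cp src dest

        # or let rsync do the no-clobber logic
        rsync --ignore-existing src dest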

    Read the article

  • How to install rmagick on Ubuntu 10.04?

    - by Andrew
    Here's what I've done so far: sudo apt-get install imagemagick libmagickcore-dev This did not throw any errors, so I think that ImageMagick is installed fine. Then I tried installing the gem: sudo gem install rmagick This resulted in the following error: ERROR: Error installing rmagick: ERROR: Failed to build gem native extension. /usr/bin/ruby1.8 extconf.rb checking for Ruby version >= 1.8.5... yes checking for gcc... yes checking for Magick-config... yes checking for ImageMagick version >= 6.4.9... yes checking for HDRI disabled version of ImageMagick... yes checking for stdint.h... yes checking for sys/types.h... yes checking for wand/MagickWand.h... no Can't install RMagick 2.13.1. Can't find MagickWand.h. *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.8 Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1 for inspection. Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out What do I need to do to install rmagick on Ubuntu 10.04?
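    The missing wand/MagickWand.h usually just means the MagickWand development headers aren't installed; a minimal sketch for 10.04 (package name as it exists in the lucid archive):

        sudo apt-get install libmagickwand-dev
        sudo gem install rmagick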

    Read the article

  • .tex file remains in use by process when batch file is triggered by .Rnw Sweave processing.

    - by drknexus
    This is a pretty specialized question. I'm using the Eclipse IDE in a Windows XP environment with the StatET plug-in so I can write R code as an R/Sweave document. This produces a .tex file that is then post-processed by pdflatex.exe. When I create the file as normal, everything works great (except maybe that my file named russfnc2.Rnw seems to result in russfnc.pdf, even though pdflatex.exe on the console window correctly says that the output is being written to russfnc2.pdf). The big problem is when I trigger a batch file from within my Rnw code. My goal here is to spawn a side process that waits for the PDF to be made and uploads it to the server. So the Rnw contains: if(file.exists("rsp.finalize.bat")) {system("rsp.finalize.bat",wait=FALSE,invisible=FALSE)} The batch file calls Rterm.exe to run a script: setwd("C:/theprojectdirectory") while(!file.exists("russfnc.pdf")) { Sys.sleep(1) } Sys.sleep(60) At the end of that script, I use a shell call to launch psftp.exe and upload the files. All of this works fine when I use my Eclipse profile to trigger Sweave... that is, unless I have that batch file call at the end of the .Rnw. When it is located there, I get the error message pdflatex.exe: Permission denied: c:\thepath\thetexfile.tex. After that, the .tex file (as far as XP is concerned) is in use by another process and I have to reboot in order to delete it (and, of course, the pdf is not made). If I manually trigger the batch file after pdflatex.exe has done its thing, everything works fine. How can I make this work correctly using the tools I'm familiar with, viz. R and DOS-style batch files? I'm not sure if this is a SuperUser question or a StackOverflow question, so I'm starting here.
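    One hedged workaround (an assumption about the cause, not a confirmed fix): the child cmd.exe spawned by system() inherits the Sweave working directory and its handles, which can keep the .tex locked while pdflatex runs. Launching the batch file fully detached, with its own working directory, avoids that; the directory and batch-file path below are placeholders.

        if (file.exists("rsp.finalize.bat")) {
          ## "start" detaches the child into its own console and working directory,
          ## so it does not hold handles inside the Sweave project directory
          shell('start "" /D "C:\\some\\other\\dir" cmd /c "C:\\theprojectdirectory\\rsp.finalize.bat"',
                wait = FALSE)
        }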

    Read the article

  • Netbook (Samsung N220) on Ubuntu 10.04 slows down WiFi for other computers

    - by Joachim
    I encountered a really odd problem with my new netbook. I am running Ubuntu 10.04 on a Samsung N220 Mito. So far everything worked fine. Now I tried the machine for the first time in our work group, where we have a wifi (with internet access) for all laptops. The wifi is controlled by a computer running SuSE 9.3 which provides a DHCP server and imposes a firewall. At the moment there is only a MacBook in the wifi, where no problems with the internet or wifi connection are encountered. Now coming to my actual problem: In addition to the MacBook, I connect the Samsung N220 to the wifi. Problem: My download speed is for some reason limited to 70KB/s max. This is neither a limitation of the server/website I browse on, nor a configuration of the netbook: at home I have 500KB/s download speeds. Furthermore, it is not a default limitation for "untrusted" or "new machines" in the wifi, as for instance other new laptops get full-speed internet with our wifi. Problem: Once the Samsung N220 is generating traffic in the wifi, the wifi is slowed down dramatically for all other machines: I run a ping to the router from the MacBook. The ping times with the N220 idling are 2-6ms. When I start downloading or browsing the web with the N220, the ping time jumps to 800ms. Vice versa, when the MacBook is generating the traffic, the ping of the N220 to the router stays constant at around 2-6ms. So clearly, it is some problem originating from my netbook or maybe its treatment in the wifi. Thanks for any help.
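    A hedged first suspect (an assumption, not a diagnosis): a client that negotiates a very low bit rate, or keeps dropping into 802.11 power saving, drags the whole cell down because the access point spends its airtime on that one station. Worth checking from the N220 (the interface name is a guess):

        # what bit rate and link quality is the netbook actually getting?
        iwconfig wlan0

        # rule out power management on the wifi card
        sudo iwconfig wlan0 power off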

    Read the article

  • OpenVPN bridged not pinging beyond openvpn server on Ubuntu/Windows 2003

    - by ani
    I set up an OpenVPN server using Ubuntu and a windows server 2003 client to interconnect two networks between two different offices. They can now ping each other, but the rest of the network cannot be contacted by the windows client. Office 1 has internal network of: 192.168.0.0 255.255.240.0 Office 2 has internal network of: 192.168.16.0 255.255.255.0 And the configuration files are: Server.conf port 1194 --script-security 2 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" # TCP or UDP server? ;proto tcp proto udp dev tap0 ;dev tun ca ca.crt cert openvpn.crt key openvpn.key dh dh1024.pem ifconfig-pool-persist ipp.txt server-bridge 192.168.0.59 255.255.240.0 192.168.6.72 192.168.6.75 push "route 192.168.0.0 255.255.240.0" push "dhcp-option DNS 192.168.0.2" push "dhcp-option DOMAIN testeers.local" keepalive 10 120 tls-auth ta.key 0 # This file is secret comp-lzo user nobody group nogroup persist-key persist-tun log /var/log/openvpn/openvpn.log status /var/log/openvpn-status.log verb 3 Client Config file client dev tap ;dev tun --script-security 2 ;proto tcp proto udp remote 1xx.2xx.xxx.124 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert admin-VAIO.crt key admin-VAIO.key ns-cert-type server tls-auth ta.key 1 comp-lzo verb 3 Ifconfig on the server now shows the following: br0 Link encap:Ethernet HWaddr 00:50:56:8b:1a:49 inet addr:192.168.0.59 Bcast:192.168.15.255 Mask:255.255.240.0 inet6 addr: fe80::250:56ff:fe8b:1a49/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1081860 errors:0 dropped:1358 overruns:0 frame:0 TX packets:242385 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:76600615 (76.6 MB) TX bytes:64474575 (64.4 MB) eth0 Link encap:Ethernet HWaddr 00:50:56:8b:1a:49 UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:1144125 errors:0 dropped:7172 overruns:0 frame:0 TX packets:252486 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:109893729 (109.8 MB) TX bytes:66372620 (66.3 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:67865 errors:0 dropped:0 overruns:0 frame:0 TX packets:67865 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:5183276 (5.1 MB) TX bytes:5183276 (5.1 MB) tap0 Link encap:Ethernet HWaddr 32:4f:42:11:b7:c5 inet6 addr: fe80::304f:42ff:fe11:b7c5/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:3329 errors:0 dropped:0 overruns:0 frame:0 TX packets:215472 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:373205 (373.2 KB) TX bytes:17465832 (17.4 MB)

    Read the article

  • Setting up phpMyAdmin with nginx on Ubuntu 11.04

    - by Patrick
    I have nginx and php5-fpm running on Ubuntu 11.04. I have installed phpMyAdmin but I'm having trouble accessing it. I would like to access it via http://localhost/phpmyadmin. I've used all the default locations for the nginx, php5, and phpmyadmin installs. I'm being directed to use the block below by the blog guide I'm following, but I'm not sure what to change to make it point where I want it to. server { listen 80; server_name php.example.com; // <-I know i need to edit this, but not sure to what. access_log /var/log/nginx/localhost.access.log; root /usr/share/phpmyadmin; index index.php; location / { try_files $uri $uri/ @phpmyadmin; } location @phpmyadmin { fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/index.php; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_NAME /index.php; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name; include fastcgi_params; } }
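    A hedged sketch of one way to get exactly http://localhost/phpmyadmin with the stock paths from the question: skip the dedicated vhost, symlink phpMyAdmin into the normal document root (assumed to be /var/www here) and let a generic PHP location hand everything to php5-fpm.

        sudo ln -s /usr/share/phpmyadmin /var/www/phpmyadmin

        # default server block
        server {
            listen 80;
            server_name localhost;
            root /var/www;
            index index.php index.html;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }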

    Read the article

  • SNMP Access on Ubuntu

    - by javano
    I am trying to use SNMP to monitor a machine, both locally on itself and remotely. This is the snmpd.conf (Ubuntu 8.04.1): # sec.name source comunity com2sec readonly 1.2.3.4 nicenandtight com2sec readonly 5.6.7.8 reallysafe group MyROGroup v1 readonly group MyROGroup v2c readonly group MyROGroup usm readonly view all included .1 view system included .iso.org.dod.internet.mgmt.mib-2.system access MyROGroup "" any noauth exact all none none syslocation my house syscontact me <[email protected]> exec .1.3.6.1.4.1.2021.7890.1 distro /usr/bin/distro smuxpeer .1.3.6.1.4.1.674.10892.1 includeAllDisks 95% 1.2.3.4 is the local machine's IP and everything is working locally. 5.6.7.8 is the remote machine, and initially I am just trying to touch SNMPD with snmpwalk from the remote machine: snmpwalk -v 2c -c reallysafe 1.2.3.4 Timeout: No Response from 1.2.3.4 I have added this to iptables as the very first rule: -A INPUT -p udp -m udp --dport 161 -j ACCEPT With such a loose iptables rule I can't see why I can't even touch the SNMPD on that Ubuntu machine. There are more specific rules further down the table, but as I couldn't connect I added the above. tcpdump shows the UDP packets coming in. What could be going wrong here?
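    One hedged thing to rule out before blaming iptables: on Debian/Ubuntu the init script often starts snmpd bound to 127.0.0.1 only (via SNMPDOPTS in /etc/default/snmpd), so remote walks time out no matter what the firewall allows. A sketch:

        # is snmpd listening on 0.0.0.0:161, or only on 127.0.0.1:161?
        sudo netstat -ulnp | grep :161

        # in /etc/default/snmpd, drop the trailing 127.0.0.1 from SNMPDOPTS, e.g.
        #   SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
        sudo /etc/init.d/snmpd restart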

    Read the article

  • Configuration of Zend Framework in Ubuntu

    - by Rahul Mehta
    Hi, I have created a project zfapi with the zf command in Ubuntu. Now http://myserverpath.com/zfapi/ gives me a listing of the folders public, application and others. http://myserverpath.com/zfapi/public gives me the index page index.php, and I have made UserController.php in application/controllers, but http://myserverpath.com/zfapi/user/ says user not found. What configuration do I need to set for it to run properly? I have edited my /etc/apache2/apache2.conf and added the following at the end: <VirtualHost *:80> DocumentRoot "/var/www/mypath/zfapi" <Directory "/var/www/mypath/zfapi"> Order allow,deny Allow from all AllowOverride all </Directory> </VirtualHost> It gives me this error while restarting the server: [Sat Jan 08 13:32:53 2011] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [Sat Jan 08 13:33:03 2011] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results What should I do?
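    A hedged sketch of the two usual fixes, assuming a standard Zend Framework layout: point the DocumentRoot at public/ (not the project root) and allow .htaccess overrides so ZF's rewrite rules can route /user to the front controller; the "mixing * ports" warning goes away once a matching NameVirtualHost *:80 declaration exists (it normally already lives in ports.conf).

        # NameVirtualHost *:80   (only if not already declared in ports.conf)
        <VirtualHost *:80>
            ServerName myserverpath.com
            DocumentRoot "/var/www/mypath/zfapi/public"
            <Directory "/var/www/mypath/zfapi/public">
                Order allow,deny
                Allow from all
                AllowOverride All
            </Directory>
        </VirtualHost>

        # mod_rewrite is needed by the stock ZF .htaccess
        sudo a2enmod rewrite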

    Read the article

  • Ubuntu 12.04 crash analysis - strange binary data on all open files at the moment of crash

    - by lanbo
    A couple of hours ago we got a system crash on Ubuntu 12.04. We checked all the log files and there is nothing suspicious to blame it on. The last thing logged was some Dovecot activity. There are no kernel panic messages. Nothing. It is a new server (new hardware) we are testing before production. And because it is new hardware, I suspect the problem may be due to a faulty component. We already ran memtester with no problem detected. I'd be happy to hear about other hardware testing tools (the machine has an SSD). Anyway, the thing I wanted to ask you is a different one. The strange thing is that in every file that was open at the moment of the crash we found the following sequence of symbols written into it: "@^@^@^@^@^@^@...". For example, in the syslog log file we got: Apr 16 15:53:56 odyssey dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<info>, method=PLAIN, rip=46.29.255.73, lip=5.9.58.177 ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ [these continue for about 1000 chars...] ^@^@^@^@Apr 16 15:55:12 odyssey kernel: imklog 5.8.6, log source = /proc/kmsg started. We got all these symbols in all open files. These include syslog, mail.log, kern.log, ... but also some logs that are written by PHP scripts run from cron under user accounts (not root). So, any idea why all open files got these characters written into them during the crash? This is pretty bad, since the crash corrupted many files (we don't even know which other ones may be affected). We suspect that all files open (in write mode, maybe) at the moment of the crash got these symbols inserted. Why is that? BTW [in case it helps], the system automatically rebooted after the crash but Apache did not start. There were no traces in /var/apache2/*log of why Apache did not start. After running "service apache2 start" it started with no problems. Also, we rebooted the machine manually and Apache started on that reboot too. But it did not start after the crash and no errors were reported. Thanks guys!

    Read the article

  • SeLinux blocking connection to sshd on Ubuntu 9.10

    - by Barton Chittenden
    When I try to log on to my laptop, which runs Ubuntu 9.10, the server rejects my login attempts. Checking /var/log/auth.log, I see the following: Feb 14 12:41:16 tiger-laptop sshd[6798]: error: ssh_selinux_getctxbyname: Failed to get default SELinux security context for tiger I googled for this, and ran across the following: http://www.spinics.net/lists/fedora-.../msg13049.html Here's the part that I think relates to the problem that I'm having: Quote: What's wrong on my system? Why it's not possible to login even if selinux is in permissive mode? Any suggestions? I'd start by trying to figure out why sshd isn't running in sshd_t (it seems to be running in sysadm_t). Paul. selinux mailing list selinux@xxxxxxxxxxxxxxxxxxxxxxx https://admin.fedoraproject.org/mail...stinfo/selinux Yes, sshd is running in sysadm_t: ps axZ | grep sshd system_u:system_r:sysadm_t 3632 ? Ss 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pi ls -Z /usr/sbin/sshd system_ubject_r:sshd_exec_t /usr/sbin/sshd Don't know why it's not sshd_t. I didn't modified something. It's a standard installation of sles11 with the default reference policy from tresys. Maybe this code snippet from policy/modules/services/ssh.te is responsible for that: Allow ssh logins as sysadm_r:sysadm_t gen_tunable(ssh_sysadm_login, true) Any ideas? Do you have boolean init_upstart set to on? if not try setting it to on. I do not believe ssh_sysadm_login boolean works currently but i may be mistaken. -- Yeah, setting init_upstart to on did the trick! THANK A LOT! Do you know why this prevents the user from logging in through ssh even if selinux is set to permissive?? Ok, so the million dollar question is "where do I set 'init_upstart=1'"? It's not clear from context which configuration file needs to be edited, and I'm not at all familiar with SELinux configuration.
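    For the final question: SELinux booleans are set with setsebool rather than in a configuration file. A hedged sketch, assuming Ubuntu's policy ships the same init_upstart boolean the quoted thread refers to (the tools come from the policycoreutils package):

        # set the boolean now and, with -P, persist it across reboots
        sudo setsebool -P init_upstart 1

        # confirm
        getsebool init_upstart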

    Read the article

  • Access to File being restricted after Ubuntu crashed

    - by Tim
    My Ubuntu 8.10 crashed due to a CPU overheating problem while I was opening a directory, intending to do some file transfer under Nautilus. After rebooting, under GNOME, none of the files can be removed, their properties cannot be viewed and they can only be opened, although everything is still fine from a terminal. I was wondering why that is and how I can fix it. Thanks and regards. Update: $ cat /etc/mtab /dev/sda7 / ext3 rw,relatime,errors=remount-ro 0 0 tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0 /proc /proc proc rw,noexec,nosuid,nodev 0 0 sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0 varrun /var/run tmpfs rw,nosuid,mode=0755 0 0 varlock /var/lock tmpfs rw,noexec,nosuid,nodev,mode=1777 0 0 udev /dev tmpfs rw,mode=0755 0 0 tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0 devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0 fusectl /sys/fs/fuse/connections fusectl rw 0 0 lrm /lib/modules/2.6.27-15-generic/volatile tmpfs rw,mode=755 0 0 /dev/sda8 /home ext3 rw,relatime 0 0 /dev/sda2 /windows-c vfat rw,utf8,umask=007,gid=46 0 0 /dev/sda5 /windows-d fuseblk rw,allow_other,blksize=4096 0 0 securityfs /sys/kernel/security securityfs rw 0 0 binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0 gvfs-fuse-daemon /home/tim/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=tim 0 0

    Read the article

  • Ubuntu dmidecode is not functioning properly

    - by Alaa Alomari
    dmidecode is giving irrelevant and conflicting results. It shows that I have two slots while the correct number is 8 (the board is a Tyan S5350). uname -a Linux synd01 3.0.0-16-server #29-Ubuntu SMP Tue Feb 14 13:08:12 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux root@synd01:/home/badmin# dmidecode -t 16 dmidecode 2.9 SMBIOS 2.33 present. Handle 0x0011, DMI type 16, 15 bytes Physical Memory Array Location: System Board Or Motherboard Use: System Memory Error Correction Type: None Maximum Capacity: 4 GB Error Information Handle: Not Provided Number Of Devices: 2 while root@synd01:/home/badmin# dmidecode -t 17 | grep Size Size: No Module Installed Size: No Module Installed Size: 1024 MB Size: 1024 MB Size: No Module Installed Size: No Module Installed Size: 1024 MB Size: 1024 MB also lshw shows: *-memory description: System Memory physical id: 11 slot: System board or motherboard size: 4GiB *-bank:0 description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty] physical id: 0 slot: J3B1 clock: 166MHz (6.0ns) *-bank:1 description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty] physical id: 1 slot: J3B3 clock: 166MHz (6.0ns) *-bank:2 description: DIMM DDR Synchronous 166 MHz (6.0 ns) physical id: 2 slot: J2B2 size: 1GiB width: 64 bits clock: 166MHz (6.0ns) *-bank:3 description: DIMM DDR Synchronous 166 MHz (6.0 ns) physical id: 3 slot: J2B4 size: 1GiB width: 64 bits clock: 166MHz (6.0ns) *-bank:4 description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty] physical id: 4 slot: J3B2 clock: 166MHz (6.0ns) *-bank:5 description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty] physical id: 5 slot: J2B1 clock: 166MHz (6.0ns) *-bank:6 description: DIMM DDR Synchronous 166 MHz (6.0 ns) physical id: 6 slot: J2B3 size: 1GiB width: 64 bits clock: 166MHz (6.0ns) *-bank:7 description: DIMM DDR Synchronous 166 MHz (6.0 ns) physical id: 7 slot: J1B1 size: 1GiB width: 64 bits clock: 166MHz (6.0ns) What might cause this conflict and how can I fix it? Thanks

    Read the article

  • Install VirtualBox on Ubuntu 12.04.1 (on [Samsung] Chromebook)

    - by iphonedev7
    I have dual-booted Ubuntu Linux 12.04.1 LTS on my Samsung Series 5 Chromebook, and am trying to run/install Oracle VirtualBox (from the generic .run file downloaded from their website). However, every time I try to run it (as root from the command line), the following error occurs: Please install the build and header files for your current Linux kernel. The current kernel version is 3.4.0 Problems were found which would prevent VirtualBox from installing. I have tried the version from the Software Center, as well as the command line installation, both of which gave me errors based on my linux-headers/linux-kernel/linux-[kernel]-image. Here's an error I keep getting (on the command line): First Installation: checking all kernels... It is likely that 3.4.0 belongs to a chroot's host Building only for 3.5.0-18-generic Building initial module for 3.5.0-18-generic ERROR (dkms apport): kernel package linux-headers-3.5.0-18-generic is not supported Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64) Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information. Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place ... And one of the more cryptic errors I get when trying to start any Virtual Machine: Result Code: NS_ERROR_FAILURE (0x80004005) Component: Machine Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}
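    A hedged sketch of what the installer is asking for; the catch on this machine is that 3.4.0 is the ChromeOS-side kernel, for which Ubuntu ships no header package, so the module can realistically only be built while running an Ubuntu kernel such as 3.5.0-18-generic:

        # toolchain plus headers for the kernel actually running
        sudo apt-get install build-essential dkms linux-headers-$(uname -r)

        # if that exact headers package does not exist, fall back to the Ubuntu kernel's
        sudo apt-get install linux-headers-generic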

    Read the article

  • Running multiple copies of openssh-server (sshd) on Ubuntu

    - by cecilkorik
    I may be attacking this problem the wrong way, if so let me know. I have a server which is available through SSH from both the public internet and the local LAN. I would like to have two very different security policies for each, by running two copies of sshd with two different sshd_config files each on a different port. Some of the things I'd like to change is to allow password or public-key authentication on the LAN, but public-key only from the internet. All (real) users could login from the LAN side, but only certain authorized users would be individually whitelisted to login through the internet. As far as I can tell this requires having two different SSH daemons running on different ports with different sshd_configs. I am fine with the different ports part, I can easily forward port 22 to any port I want through my firewall. So my question is what is the best way to actually START the second sshd under Ubuntu 10.04 LTS. Is there a recommended way to do something like this? Surely I am not the first person with this sort of need. I have a bit of experience with upstart, and I can manually hack the second sshd into /etc/init/ssh.conf I suppose but I'm not sure if that will get overwritten by the package. However I do it, It's important to ensure both sshd processes always get restarted after any automatic or manual upgrade of the openssh-server package. Thanks in advance.
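    A minimal sketch of the usual pattern on 10.04: leave the packaged /etc/init/ssh.conf alone and add a second upstart job with its own name and config file, since the openssh-server package only ever touches the job it owns. File names and the port are placeholders:

        # /etc/ssh/sshd_config_internal: a copy of sshd_config with e.g.
        #   Port 2222
        #   PasswordAuthentication yes

        # /etc/init/ssh-internal.conf
        description "OpenSSH server (LAN policy)"
        start on filesystem or runlevel [2345]
        stop on runlevel [!2345]
        respawn
        respawn limit 10 5
        exec /usr/sbin/sshd -D -f /etc/ssh/sshd_config_internal

        # then: sudo start ssh-internal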

    Read the article

< Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >