Search Results

  • My sound stopped working today, how can I fix it?

    - by Oli
    This seems to be a problem with PulseAudio. I was logged in over VNC on my phone and started playing a video; this caused X to crash (as sometimes happens). I restarted, and suddenly the sound doesn't work. I have an Intel HDA/Realtek ALC889:

        00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller

    alsamixer is detecting this just fine. PulseAudio doesn't detect this ALSA device, so it is using auto_null as the default sink (logs below). When I properly kill PulseAudio (tell it not to auto-start), direct ALSA communication with the sound card works just fine; speaker-test, for example, works. So the hardware and ALSA layers are fine, IMO. In the logs, it seems that the card might be "busy", but I really don't know how or why it would be now (and never before). Is there an ALSA lock file somewhere that is still there because of my crash?

    I just ran sudo fuser /dev/snd/* and saw this:

        oli@bert:~$ sudo fuser /dev/snd/*
        /dev/snd/controlC0: 1884
        /dev/snd/pcmC0D0c: 1884m
        /dev/snd/timer: 1884

    A look at the process list (ps aux | grep 1884) tells me process 1884 is arecord -c 1 -f S16_LE -r 8000 -t raw. No idea what this is or why it's running. When I try to kill arecord (as root), it just respawns and rebinds to the hardware. I'm in a very annoying situation where I don't know what is going on and don't know how to find out. I'm open to all suggestions to get this working again. Fire away.

    And here's what I get when I stop PA auto-loading, kill it, and then start it with -vvvvv:

        oli@bert:~$ pulseaudio -vvvvv
        I: main.c: setrlimit(RLIMIT_NICE, (31, 31)) failed: Operation not permitted
        I: main.c: setrlimit(RLIMIT_RTPRIO, (9, 9)) failed: Operation not permitted
        D: core-rtclock.c: Timer slack is set to 50 us.
        D: core-util.c: RealtimeKit worked.
        I: core-util.c: Successfully gained nice level -11.
        I: main.c: This is PulseAudio 0.9.21-63-gd3efa-dirty
        D: main.c: Compilation host: x86_64-pc-linux-gnu
        D: main.c: Compilation CFLAGS: -g -O2 -g -Wall -O3 -Wall -W -Wextra -pipe -Wno-long-long -Winline -Wvla -Wno-overlength-strings -Wunsafe-loop-optimizations -Wundef -Wformat=2 -Wlogical-op -Wsign-compare -Wformat-security -Wmissing-include-dirs -Wformat-nonliteral -Wold-style-definition -Wpointer-arith -Winit-self -Wdeclaration-after-statement -Wfloat-equal -Wmissing-prototypes -Wstrict-prototypes -Wredundant-decls -Wmissing-declarations -Wmissing-noreturn -Wshadow -Wendif-labels -Wcast-align -Wstrict-aliasing=2 -Wwrite-strings -Wno-unused-parameter -ffast-math -Wp,-D_FORTIFY_SOURCE=2 -fno-common -fdiagnostics-show-option
        D: main.c: Running on host: Linux x86_64 2.6.38-rc3 #1 SMP Tue Feb 1 10:53:04 GMT 2011
        D: main.c: Found 8 CPUs.
        I: main.c: Page size is 4096 bytes
        D: main.c: Compiled with Valgrind support: no
        D: main.c: Running in valgrind mode: no
        D: main.c: Running in VM: no
        D: main.c: Optimised build: yes
        D: main.c: All asserts enabled.
        I: main.c: Machine ID is 8310740c4729ef474fe5ecec4bbf5a6b.
        I: main.c: Session ID is 8310740c4729ef474fe5ecec4bbf5a6b-1297338553.571075-1050119523.
        I: main.c: Using runtime directory /home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-runtime.
        I: main.c: Using state directory /home/oli/.pulse.
        I: main.c: Using modules directory /usr/lib/pulse-0.9.21/modules.
        I: main.c: Running in system mode: no
        I: main.c: Fresh high-resolution timers available! Enjoy ol' chap!
        I: cpu-x86.c: CPU flags: CMOV MMX SSE SSE2 SSE3 SSSE3 SSE4_1 SSE4_2
        I: svolume_mmx.c: Initialising MMX optimized functions.
        I: remap_mmx.c: Initialising MMX optimized remappers.
        I: svolume_sse.c: Initialising SSE2 optimized functions.
        I: remap_sse.c: Initialising SSE2 optimized remappers.
        I: sconv_sse.c: Initialising SSE2 optimized conversions.
        D: memblock.c: Using shared memory pool with 1024 slots of size 64.0 KiB each, total size is 64.0 MiB, maximum usable slot size is 65472
        D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes.tdb'
        I: module-device-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes'.
        I: module.c: Loaded "module-device-restore" (index: #0; argument: "").
        D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes.tdb'
        I: module-stream-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes'.
        I: module.c: Loaded "module-stream-restore" (index: #1; argument: "").
        D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database.tdb'
        I: module-card-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database'.
        I: module.c: Loaded "module-card-restore" (index: #2; argument: "").
        I: module.c: Loaded "module-augment-properties" (index: #3; argument: "").
        D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success
        D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes
        D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes
        I: module-udev-detect.c: Found 1 cards.
        I: module.c: Loaded "module-udev-detect" (index: #4; argument: "").
        D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-bluetooth-discover.so': success
        D: dbus-util.c: Successfully connected to D-Bus system bus ba7c9a1f90b3d49d930bca2100000015 as :1.62
        D: bluetooth-util.c: dbus: interface=org.freedesktop.DBus, path=/org/freedesktop/DBus, member=NameAcquired
        D: bluetooth-util.c: Bluetooth daemon is apparently not available.
        I: module.c: Loaded "module-bluetooth-discover" (index: #5; argument: "").
        D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-esound-protocol-unix.so': success
        I: module.c: Loaded "module-esound-protocol-unix" (index: #6; argument: "").
        I: module.c: Loaded "module-native-protocol-unix" (index: #7; argument: "").
        D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-gconf.so': success
        I: module.c: Loaded "module-gconf" (index: #8; argument: "").
        I: module-default-device-restore.c: Saved default sink 'auto_null' not existant, not restoring default sink setting.
        I: module-default-device-restore.c: Saved default source 'auto_null.monitor' not existant, not restoring default source setting.
        I: module.c: Loaded "module-default-device-restore" (index: #9; argument: "").
        I: module.c: Loaded "module-rescue-streams" (index: #10; argument: "").
        D: module-always-sink.c: Autoloading null-sink as no other sinks detected.
        I: sink.c: Created sink 0 "auto_null" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center
        I: sink.c: device.description = "Dummy Output"
        I: sink.c: device.class = "abstract"
        I: sink.c: device.icon_name = "audio-card"
        D: core-subscribe.c: Dropped redundant event due to change event.
        I: source.c: Created source 0 "auto_null.monitor" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center
        I: source.c: device.description = "Monitor of Dummy Output"
        I: source.c: device.class = "monitor"
        I: source.c: device.icon_name = "audio-input-microphone"
        D: module-null-sink.c: Thread starting up
        I: module.c: Loaded "module-null-sink" (index: #11; argument: "sink_name=auto_null sink_properties='device.description="Dummy Output"'").
        I: module.c: Loaded "module-always-sink" (index: #12; argument: "").
        I: module.c: Loaded "module-intended-roles" (index: #13; argument: "").
        D: module-suspend-on-idle.c: Sink auto_null becomes idle, timeout in 5 seconds.
        I: module.c: Loaded "module-suspend-on-idle" (index: #14; argument: "").
        I: client.c: Created 0 "ConsoleKit Session /org/freedesktop/ConsoleKit/Session1"
        D: module-console-kit.c: Added new session /org/freedesktop/ConsoleKit/Session1
        I: module.c: Loaded "module-console-kit" (index: #15; argument: "").
        I: module.c: Loaded "module-position-event-sounds" (index: #16; argument: "").
        D: dbus-util.c: Successfully connected to D-Bus session bus efbffc6788fad56cfd64d40c00000018 as :1.182
        D: main.c: Got org.pulseaudio.Server!
        I: main.c: Daemon startup complete.
        I: client.c: Created 1 "Native client (UNIX socket client)"
        I: client.c: Created 2 "Native client (UNIX socket client)"
        D: protocol-native.c: Protocol version: remote 16, local 16
        I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1
        D: protocol-native.c: SHM possible: yes
        D: protocol-native.c: Negotiated SHM: yes
        D: protocol-native.c: Protocol version: remote 16, local 16
        I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1
        D: protocol-native.c: SHM possible: yes
        D: protocol-native.c: Negotiated SHM: yes
        D: module-augment-properties.c: Looking for .desktop file for gnome-volume-control-applet
        D: module-augment-properties.c: Looking for .desktop file for gnome-settings-daemon
        D: core-subscribe.c: Dropped redundant event due to change event.
        I: module-suspend-on-idle.c: Sink auto_null idle for too long, suspending ...
        D: sink.c: Suspend cause of sink auto_null is 0x0004, suspending

    Note the one section that seems to find the hardware but says it's busy (no idea if this is relevant):

        D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success
        D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes
        D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes
        I: module-udev-detect.c: Found 1 cards.
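
    A hedged first step, not part of the original question: since killing arecord directly just makes it respawn, chase its parent instead (PID 1884 comes from the fuser output above; substitute whatever it reports for you):

        # who holds the ALSA devices, with process names shown
        sudo fuser -v /dev/snd/*
        # find the parent that keeps respawning arecord
        ps -o ppid= -p 1884
        # inspect that parent; if it is a daemon, stop its service instead of the child
        ps -fp "$(ps -o ppid= -p 1884)"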

  • Process not Listed by PS or in /proc/

    - by Hammer Bro.
    I'm trying to figure out how to operate a rather large Java program, 'prog'. I go to its /bin/ dir, configure its setenv.sh and prog.sh to use local directories and my current user account, then try to run it via ./prog.sh start. Here are all the relevant bits of prog.sh:

        USER=(my current account)
        _CMD="/opt/jdk/bin/java -server -Xmx768m -classpath "${CLASSPATH}" -jar "${DIR}/prog.jar""
        case "${ACTION}" in
        start)
            nohup su ${USER} -c "exec ${_CMD} >>${_LOGFILE} 2>&1" >/dev/null &
            echo $! >${_PID}
            echo "Prog running. PID="`cat ${_PID}`
            ;;
        stop)
            PID=`cat ${_PID} 2>/dev/null`
            echo "Shutting down prog: ${PID}"
            kill -QUIT ${PID} 2>/dev/null
            kill ${PID} 2>/dev/null
            kill -KILL ${PID} 2>/dev/null
            rm -f ${_PID}
            echo "STOPPED `date`" >>${_LOGFILE}
            ;;

    When I actually do ./prog.sh start, it starts. But I can't find it at all in the process list, nor can I kill it manually using the same command the shell script uses. Yet I can tell it's running, because if I do ./prog.sh stop, it stops (and some temporary files elsewhere clean themselves out).

        ./prog.sh start
        Prog running. PID=1234
        ps eaux | grep 1234
        ps eaux | grep -i prog.jar
        ps eaux >> pslist.txt

    (It's not there either by PID or by any clear name I can find: prog, java or jar.)

        cd /proc/1234/
        -bash: cd: /proc/1234/: No such file or directory
        kill -QUIT 1234
        kill 1234
        kill -KILL 1234
        -bash: kill: (1234) - No such process
        ./prog.sh stop
        Shutting down prog: 1234

    As far as I can tell, the process is running yet not listed by the system in any way. I can't find it in ps or /proc/, nor can I kill it. But the shell script can still stop it properly. So my question is: how can something like this happen? Is the process supremely hidden, actually unlisted, or am I just missing it in some fashion? I'm trying to figure out what makes this program tick, and I can barely prove that it's ticking!

    Edit: ps eu | grep prog.sh (after having restarted, so the PID is random):

        50038 19381 0.0 0.0 4412 632 pts/3 S+ 16:09 0:00 grep prog.sh HOSTNAME=machine.server.com TERM=vt100 SHELL=/bin/bash HISTSIZE=1000 SSH_CLIENT=::[STUFF] 1754 22 CVSROOT=:[DIR] SSH_TTY=/dev/pts/3 ANT_HOME=/opt/apache-ant-1.7.1 USER=[USER] LS_COLORS=[COLORS] SSH_AUTH_SOCK=[DIR] KDEDIR=/usr MAIL=[DIR] PATH=[DIRS] INPUTRC=/etc/inputrc PWD=[PWD] JAVA_HOME=/opt/jdk1.6.0_21 LANG=en_US.UTF-8 SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass M2_HOME=/opt/apache-maven-2.2.1 SHLVL=1 HOME=[~] LOGNAME=[USER] SSH_CONNECTION=::[STUFF] LESSOPEN=|/usr/bin/lesspipe.sh %s G_BROKEN_FILENAMES=1 _=/bin/grep OLDPWD=[DIR]

    I just realized that the stop) part of prog.sh isn't actually a guarantee that the process it claims to be stopping is running; it just tries to kill the PID, suppresses all output, deletes the temporary file and manually inserts STOPPED into the log file. So I'm no longer so certain that the process is always running when I ps for it, although the sample above indicates that it at least runs erratically. I'll continue looking into this undocumented behemoth when I return to work tomorrow.
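
    Two hedged observations, offered as suggestions rather than a diagnosis: $! in the start) branch captures the PID of the backgrounded su wrapper, while the JVM runs as a child of it with a different PID; and ps output can be truncated at terminal width, hiding long java command lines. A quick sketch:

        # search the full command line (the JVM process won't be named 'prog')
        pgrep -lf prog.jar
        # unlimited-width listing so long java command lines are not cut off
        ps auxww | grep '[p]rog.jar'
        # the PID file holds the su wrapper's PID; list its children
        ps --ppid "$(cat /path/to/prog.pid)" -f    # hypothetical path; use the script's ${_PID}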

  • 2 drives, slow software RAID1 (md)

    - by bart613
    Hello, I've got a server from hetzner.de (EQ4) with 2x SAMSUNG HD753LJ drives (750G, 32MB cache). The OS is CentOS 5 (x86_64). The drives are combined into two RAID1 arrays:

        /dev/md0, 512MB, holds only the /boot partition
        /dev/md1, over 700GB, one big LVM hosting the other partitions

    Now, I've been running some benchmarks, and it seems that even though the drives are exactly the same model, their speed differs a bit:

        # hdparm -tT /dev/sda
        /dev/sda:
        Timing cached reads: 25612 MB in 1.99 seconds = 12860.70 MB/sec
        Timing buffered disk reads: 352 MB in 3.01 seconds = 116.80 MB/sec

        # hdparm -tT /dev/sdb
        /dev/sdb:
        Timing cached reads: 25524 MB in 1.99 seconds = 12815.99 MB/sec
        Timing buffered disk reads: 342 MB in 3.01 seconds = 113.64 MB/sec

    Also, when I run e.g. pgbench, which stresses IO quite heavily, I see the following in the iostat output:

        Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
        sda    0.00 231.40 0.00 298.00 0.00 9683.20 32.49 0.17 0.58 0.34 10.24
        sda1   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        sda2   0.00 231.40 0.00 298.00 0.00 9683.20 32.49 0.17 0.58 0.34 10.24
        sdb    0.00 231.40 0.00 301.80 0.00 9740.80 32.28 14.19 51.17 3.10 93.68
        sdb1   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        sdb2   0.00 231.40 0.00 301.80 0.00 9740.80 32.28 14.19 51.17 3.10 93.68
        md1    0.00 0.00 0.00 529.60 0.00 9692.80 18.30 0.00 0.00 0.00 0.00
        md0    0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        dm-0   0.00 0.00 0.00 0.60 0.00 4.80 8.00 0.00 0.00 0.00 0.00
        dm-1   0.00 0.00 0.00 529.00 0.00 9688.00 18.31 24.51 49.91 1.81 95.92

        Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
        sda    0.00 152.40 0.00 330.60 0.00 5176.00 15.66 0.19 0.57 0.19 6.24
        sda1   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        sda2   0.00 152.40 0.00 330.60 0.00 5176.00 15.66 0.19 0.57 0.19 6.24
        sdb    0.00 152.40 0.00 326.20 0.00 5118.40 15.69 19.96 55.36 3.01 98.16
        sdb1   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        sdb2   0.00 152.40 0.00 326.20 0.00 5118.40 15.69 19.96 55.36 3.01 98.16
        md1    0.00 0.00 0.00 482.80 0.00 5166.40 10.70 0.00 0.00 0.00 0.00
        md0    0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        dm-0   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        dm-1   0.00 0.00 0.00 482.80 0.00 5166.40 10.70 30.19 56.92 2.05 99.04

        Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
        sda    0.00 181.64 0.00 324.55 0.00 5445.11 16.78 0.15 0.45 0.21 6.87
        sda1   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        sda2   0.00 181.64 0.00 324.55 0.00 5445.11 16.78 0.15 0.45 0.21 6.87
        sdb    0.00 181.84 0.00 328.54 0.00 5493.01 16.72 18.34 61.57 3.01 99.00
        sdb1   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        sdb2   0.00 181.84 0.00 328.54 0.00 5493.01 16.72 18.34 61.57 3.01 99.00
        md1    0.00 0.00 0.00 506.39 0.00 5477.05 10.82 0.00 0.00 0.00 0.00
        md0    0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        dm-0   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        dm-1   0.00 0.00 0.00 506.39 0.00 5477.05 10.82 28.77 62.15 1.96 99.00

    And this is completely confusing me. How come two identically specced drives show such a difference in write speed (see %util)? I haven't really paid attention to those speeds before, so perhaps this is normal; if someone could confirm that, I would be really grateful. Otherwise, if someone has seen such behavior before or knows what causes it, I would really appreciate an answer.
    I'll also add that the outputs of both smartctl -a and hdparm -I are exactly the same for the two drives and are not indicating any hardware problems. The slower drive has already been replaced twice (with new ones). I also asked for the drives to be swapped in their bays, and then sda was the slower and sdb the quicker one (so the slow one was the same physical drive). The SATA cables have been changed twice as well.
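
    A few hedged checks that can explain identical drives benchmarking differently; none of these come from the original post, they are just standard places to look:

        # write cache enabled on both? (a disabled cache tanks write %util)
        sudo hdparm -W /dev/sda /dev/sdb
        # NCQ queue depth per drive (31 = NCQ active, 1 = disabled)
        cat /sys/block/sda/device/queue_depth /sys/block/sdb/device/queue_depth
        # make sure no md resync or check is running while benchmarking
        cat /proc/mdstat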

  • USB packets - receive wrong data

    - by regorianer
    I have a little Python script which shows me the packets from an EnOcean device and triggers some events depending on the packet type. Unfortunately it doesn't work, because I'm getting wrong packets.

    The relevant part of the Python script (using pySerial):

        ser = serial.Serial('/dev/ttyUSB1', 57600, bytesize=serial.EIGHTBITS, timeout=1, parity=serial.PARITY_NONE, rtscts=0)
        print 'clearing buffer'
        s = ser.read(10000)
        print 'start read'
        while 1:
            s = ser.read(1)
            for character in s:
                sys.stdout.write(" %s" % character.encode('hex'))
        print 'end'
        ser.close()

    Output at baudrate 57600:

        e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00
        e0 e0 e0 e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0
        00 00 e0 e0 e0

    Output at baudrate 9600:

        a5 5a 0b 05 10 00 00 00 00 15 c4 56 20 6f a5 5a 0b 05 00 00 00 00 00 15 c4 56 20 5f

    Linux terminal at baudrate 57600:

        $ stty -F /dev/ttyUSB1 57600
        $ stty < /dev/ttyUSB1
        speed 57600 baud; line = 0; eof = ^A; min = 0; time = 0;
        -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke
        $ while (true) do cat -A /dev/ttyUSB1 ; done > myfile
        $ hexdump -C myfile
        00000000 4d 2d 60 4d 2d 60 5e 40 4d 2d 60 5e 40 4d 2d 60 |M-M-^@M-^@M-|
        00000010 4d 2d 60 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 4d 2d |M-M-M-M-^@M-|
        00000020 60 4d 2d 60 5e 40 5e 40 5e 40 5e 40 5e 40 5e 40 |M-^@^@^@^@^@^@|
        00000030 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 5e 40 5e |^@M-M-M-`^@^@^|
        00000040 40 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 |@^@M-M-M-`|
        0000004c

    Linux terminal at baudrate 9600:

        $ hexdump -C myfile2
        00000000 5e 40 5e 55 4d 2d 44 56 30 4d 2d 3f 5e 40 5e 40 |^@^UM-DV0M-?^@^@|
        00000010 5e 55 4d 2d 44 56 20 5f |^UM-DV _|
        00000018

    The specification says:

        0x55   sync byte
        0xNNNN data length bytes (2 bytes)
        0x07   opt length byte
        0x01   type byte
        CRC, data, opt data, and another CRC

    But I'm not getting this packet structure. The output of the Python script differs from the one I get via the terminal. I also wrote the Python part in C, but the output is the same as with Python. The USB receiver is a BSC-BoR USB receiver/sender; the EnOcean device is a simple button.
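
    Given the spec quoted above (sync byte 0x55, two length bytes, opt length, type, CRC, data, opt data, CRC), a hedged Python 2 sketch that resynchronizes on the sync byte before decoding a packet. The field handling is an assumption based purely on that quoted layout and is untested against real hardware:

        import serial

        ser = serial.Serial('/dev/ttyUSB1', 57600, timeout=1)
        while True:
            if ser.read(1) != '\x55':       # skip bytes until the sync byte
                continue
            header = ser.read(4)            # len-high, len-low, opt-len, type
            if len(header) < 4:
                continue
            data_len = (ord(header[0]) << 8) | ord(header[1])
            opt_len = ord(header[2])
            ptype = ord(header[3])
            # header CRC + data + opt data + data CRC, per the quoted layout
            rest = ser.read(1 + data_len + opt_len + 1)
            print 'type %02x:' % ptype, rest.encode('hex')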

  • Oracle Linux 6 DVDs Now Available

    - by sergio.leunissen
    On Sunday 6 February 2011, Oracle Linux 6 was released on the Unbreakable Linux Network for customers with an Oracle Linux support subscription. Shortly after that, the Oracle Linux 6 RPMs were made available on our public yum server. Today we published the installation DVD images on edelivery.oracle.com/linux. Oracle Linux 6 is free to download, install and use. The full release notes are here, but similar to my recent post about Oracle Linux 5.6, I wanted to highlight a few items about this release.

    Unbreakable Enterprise Kernel

    As is the case with Oracle Linux 5.6, the default installed kernel on the x86_64 platform in Oracle Linux 6 is the Unbreakable Enterprise Kernel. If you haven't already, I highly recommend you watch the replay of this webcast by Chris Mason on the performance improvements made in this kernel.

        # uname -r
        2.6.32-100.28.5.el6.x86_64

    The Unbreakable Enterprise Kernel is delivered via the package kernel-uek:

        [root@localhost ~]# yum info kernel-uek
        ...
        Installed Packages
        Name        : kernel-uek
        Arch        : x86_64
        Version     : 2.6.32
        Release     : 100.28.5.el6
        Size        : 84 M
        Repo        : installed
        From repo   : anaconda-OracleLinuxServer-201102031546.x86_64
        Summary     : The Linux kernel
        URL         : http://www.kernel.org/
        License     : GPLv2
        Description : The kernel package contains the Linux kernel (vmlinuz), the core of
                    : any Linux operating system. The kernel handles the basic functions
                    : of the operating system: memory allocation, process allocation,
                    : device input and output, etc.

    ext4 file system

    The ext4 or fourth extended filesystem replaces ext3 as the default filesystem in Oracle Linux 6.

        # mount
        /dev/mapper/VolGroup-lv_root on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
        /dev/sda1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

    Red Hat compatible kernel

    Oracle Linux 6 also includes a Red Hat compatible kernel built directly from RHEL source. It's already installed, so booting it is a matter of editing /etc/grub.conf.

        # rpm -qa | grep kernel-2.6.32
        kernel-2.6.32-71.el6.x86_64

    Oracle Linux 6 no longer includes a Red Hat compatible kernel with Oracle bug fixes. The only Red Hat compatible kernel included is the one built directly from RHEL source.

    Yum-only access to Unbreakable Linux Network (ULN)

    Oracle Linux 6 uses yum exclusively for access to the Unbreakable Linux Network. To register your system with ULN, use the following command:

        # uln_register

    No Itanium Support

    Oracle Linux 6 is not supported on the Itanium (ia64) platform.

    Next Steps

        Read the release notes
        Download Oracle Linux 6 for free
        Discuss on the Oracle Linux forum
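
    A footnote to the "Red Hat compatible kernel" section above: since Oracle Linux 6 uses legacy GRUB, booting the compatible kernel amounts to pointing default= at its menu entry in /etc/grub.conf. A hedged sketch; the entry index here is an assumption, so check the title lines first:

        # list the kernel entries; "default" is a 0-based index into these
        grep "^title" /etc/grub.conf
        # if the Red Hat compatible kernel is the second entry:
        sudo sed -i 's/^default=.*/default=1/' /etc/grub.conf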

  • How to use Hybrid Graphic Switch on Sony Vaio Z?

    - by Travis R
    I got it to install nicely and it's all working, but I don't know which graphics card is being used, nor how to switch between them. I tried installing the official Nvidia drivers, but then I could not boot my computer afterwards, so I have not installed them again after reinstalling Ubuntu. PS: if you get a GRUB install failure during installation, the key is to tell the installer where to put the bootloader at the very beginning, on the partition selection screen (choose /dev/mapper, not the /dev/sda it defaults to).
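
    A quick, hedged way to at least see which card is doing the rendering (glxinfo ships in the mesa-utils package):

        lspci | grep -i vga                  # list both adapters
        glxinfo | grep "OpenGL renderer"     # the GPU driving the current X session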

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0).
    Linux Containers (LXC) provide a means to isolate individual services or applications, as well as a complete Linux operating system, from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or on the host system are not visible from inside a container. Additionally, Linux Containers allow for fine-grained control of resources like RAM, CPU or disk I/O.

    Generally speaking, Linux Containers use a completely different approach than "classical" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based). An application running inside a container is executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O resources. Linux Containers can offer the best possible performance and several possibilities for managing and sharing the available resources.

    Similar to Containers (or Zones) on Oracle Solaris or jails on FreeBSD, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided the distribution supports the kernel version that runs on the host. This approach has one caveat, though: if any of the containers causes a kernel crash, it brings down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6, which makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system.

    Since Linux Containers are fully implemented at the OS level (the Linux kernel), they can easily be combined with other virtualization technologies. It's certainly possible to set up Linux Containers within a virtualized Linux instance that runs inside Oracle VM Server for x86 or Oracle VM VirtualBox.

    Some use cases for Linux Containers include:

    - Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server, or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated into containers on a single server, to save energy and rack space.

    - Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents one user's application from hogging the entire system and ensures that each user only has access to his own data set. It also helps to save main memory: if multiple instances of the same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use; they are generally held in memory once and mapped into multiple processes.

    - Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This allows for repeatable software builds and test environments, because the system is always reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications.

    - Safe execution of an individual application: if an application running inside a container is compromised because of a security vulnerability, the host system and the other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system.

    Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview; their use is only recommended for testing and evaluation purposes.

    The open-source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (cgroups) and "Namespaces" functionality of the Linux kernel. Oracle is actively involved in Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory, CPU cycles, as well as the disk and network throughput (in MB/s or IOP/s) available to an application. Namespaces help to isolate process groups from each other, e.g. the visibility of other running processes or exclusive access to a network device. It's also possible to restrict a process group's access to and visibility of the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Namespaces provide the foundation that Linux Containers are built on, but they can also be used independently. A more detailed description of how Linux Containers can be created and managed on Oracle Linux will follow in the second part of this article.

    Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy; Linux Containers on Wikipedia.
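
    To make the Control Groups part above concrete, here is a minimal hand-driven sketch using the cgroup v1 interface (the paths assume the memory controller is mounted under /sys/fs/cgroup/memory, as it is on most recent distributions):

        # create a group and cap it at 256 MiB
        sudo mkdir /sys/fs/cgroup/memory/demo
        echo $((256*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
        # move the current shell (and its future children) into the group
        echo $$ | sudo tee /sys/fs/cgroup/memory/demo/tasks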

  • Problems with making a bootable USB Drive on a Mac

    - by Jason
    I am following this guide, http://www.ubuntu.com/download/help/create-a-usb-stick-on-mac-osx, to create a bootable USB drive for my Mac, and this is the output I get at step 8:

        new-host:~ Jason$ sudo dd if=/Users/Jason/Desktop/ubuntu-12.04-desktop-amd64.dmg of=/dev/rdisk1s2 bs=1m
        dd: /dev/rdisk1s2: Invalid argument
        691+1 records in
        691+0 records out
        724566016 bytes transferred in 164.544659 secs (4403461 bytes/sec)

    What should I do?
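
    dd usually reports "Invalid argument" here because the target is a partition slice (rdisk1s2) rather than the whole device. A hedged sketch of the guide's intended step, assuming the stick really is disk1 as in the output above (double-check with diskutil list, since dd to the wrong disk is destructive):

        diskutil list                      # confirm the USB stick's disk number
        diskutil unmountDisk /dev/disk1    # unmount all of its volumes
        sudo dd if=/Users/Jason/Desktop/ubuntu-12.04-desktop-amd64.dmg of=/dev/rdisk1 bs=1m
        diskutil eject /dev/disk1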

  • Deploying an ADF Secure Application using WLS Console

    - by juan.ruiz
    Last week I worked on a requirement from a customer who wanted to understand how to deploy an application with ADF Security to WLS without using JDeveloper. The main question was what steps were needed in order to set up enterprise roles, security policies and application credentials. In this entry I will explain the steps taken using JDeveloper 11.1.1.2.0.

    Requirements: Instead of building a sample application from scratch, we can use Andrejus's sample application, which contains all the security pieces that we need. Open and migrate the project, and make sure you adjust the database settings accordingly.

    Creating the EAR file

    Review the security settings of the application by going into the Application -> Secure menu and see that there are two enterprise roles as well as the ADF policies enforcing security on the main page. Make sure the Application Module uses the Data Source instead of a JDBC URL for its connection type, and take note of the data source name; in my case I have: java:comp/env/jdbc/HrDS

    To facilitate access to this application once we deploy it, go to your ViewController project properties, select the Java EE Application category, and give meaningful names to the context root and the application name. Go to the ADFSecurityWL application properties -> Deployment and create a new EAR deployment profile. Uncheck "Auto generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment". Deploy the application as an EAR file.

    Deploying the application to WLS using the WLS Console

    On the WLS console, create a JNDI data source. This is the part I found the trickiest of the whole exercise, given that the name should match the AM's data source name; the naming convention that worked for me was jdbc.HrDS

    Now, deploy the application manually by selecting Deployments -> Install, look for the EAR, and follow the default steps. If this is the first time you deploy the application, once the deployment finishes you will be asked to Activate Changes on the domain; these changes contain all the security policy and application role insertions into the WLS instance.

    Creating roles and user groups for the application

    To finish the after-deployment setup, we need to create the groups that are the equivalent of the enterprise roles of ADF Security. For our sample we have two enterprise roles, employeesApplication and managersApplication. After that, we create the application users and assign them to their respective groups. Now we can run the application and test the security constraints.

  • INETA Community Leadership Summit

    - by Scott Spradlin
    The INETA Community Leadership Summit will take place on Sunday, June 6th at 1 PM at Tech·Ed North America in New Orleans. INETA is hosting this free summit at the Ernest N. Morial Convention Center prior to the start of Tech·Ed 2010. The summit is open to community leaders from the area, as well as those attending Tech·Ed from across the country and around the world. It is an excellent opportunity for exchanging information and ideas. If you are a user group leader, or are involved in the leadership, planning, promotion, or day-to-day operations of a user group community, this event is for YOU!

    The summit is an open forum to share ideas, discuss common challenges, and gain from the experience of other leaders. INETA Community Leadership summits are part of an ongoing effort by INETA to create, improve and share resources designed to strengthen individual user groups and the community. This meeting will be the perfect opportunity to meet leaders from other groups, benefit from their success stories, and expand your network of contacts.

    Quick FAQs

    Who can attend? Any leader or volunteer of any INETA user group.

    Do I need to be attending Tech·Ed? No, you do NOT need to purchase a pass for Tech·Ed to attend the Leadership Summit.

    What does it cost to attend? There is NO cost to attend the summit, but the knowledge that will be available about user groups will be priceless.

    I want to help out; who do I contact? Send an email to [email protected] if you are interested.

    I want to attend; where do I register? We are putting together a registration link now; it will be published in a future newsletter and on the website.

    What will the format of the summit be? The summit will be like our Birds of a Feather sessions, but focused on user group topics. Moderators will be armed with some broad topics to kick off the conversation; however, the real value of these sessions is the chance to learn from each other.

    What topics will be covered? We are thinking of focusing on four areas: Running a User Group, Effective Content and Presenters, User Group Promotion, and Developing Partnerships. However, the agenda is yours! If there is a topic you want to see covered, or a topic that you would like to lead, then email [email protected].

  • How to correctly partition usb flash drive and which filesystem to choose considering wear leveling?

    - by random1
    Two problems.

    First one: how to partition the flash drive? I shouldn't need to do this, but I'm no longer sure my partition is properly aligned, since I was forced to delete and create a new partition table after GParted complained when I tried to format the drive from FAT to ext4. The naive answer would be "just use the defaults and everything is going to be alright". However, if you read the following links you'll know things are not that simple: https://lwn.net/Articles/428584/ and http://linux-howto-guide.blogspot.com/2009/10/increase-usb-flash-drive-write-speed.html

    Then there is also the issue of cylinders, heads and sectors. Currently I get this:

        $ sfdisk -l -uM /dev/sdd
        Disk /dev/sdd: 30147 cylinders, 64 heads, 32 sectors/track
        Warning: The partition table looks like it was made for C/H/S=*/255/63 (instead of 30147/64/32).
        For this listing I'll assume that geometry.
        Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0
        Device Boot Start End MiB #blocks Id System
        /dev/sdd1 1 30146 30146 30869504 83 Linux

        $ fdisk -l /dev/sdd
        Disk /dev/sdd: 31.6 GB, 31611420672 bytes
        255 heads, 63 sectors/track, 3843 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00010c28

    So from my current understanding I should align partitions at 4 MiB (currently it's at 1 MiB). But I still don't know how to set the heads and sectors properly for my device.

    Second problem: the file system. From the benchmarks I saw, ext4 provides the best performance; however, there is the issue of wear leveling. How can I know that my Transcend JetFlash 700's microcontroller provides wear leveling? Or will I just be killing my drive faster? I've seen a lot of posts on the web saying "don't worry, the newer drives already take care of that", but I've never seen a single piece of evidence backing that up, and at some point people start mixing up SSD and USB flash drive technology. The safe option would be to go for ext2; however, a series of tests that I performed showed horrible performance! These values are from a real scenario, not some synthetic test (42 files, 3,429,415,284 bytes copied to the flash drive):

        original FAT32: 15.1 MiB/s
        ext4 after new partition table: 10.2 MiB/s
        ext2 after new partition table: 1.9 MiB/s

    Please read the links that I posted above before answering. I would also be interested in answers backed up with some references, because a lot is said and re-said but facts are lacking. Thank you for the help.
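
    On the alignment question, a hedged sketch using parted, starting the first partition at the 4 MiB boundary discussed above. This destroys the current partition table, so back the drive up first:

        # recreate the table with the first partition aligned at 4MiB
        sudo parted /dev/sdd mklabel msdos
        sudo parted -a optimal /dev/sdd mkpart primary ext4 4MiB 100%
        sudo mkfs.ext4 /dev/sdd1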

  • What's going on with INETA and the Regional Speakers Bureau?

    - by Chris Williams
    For those of you who have been waiting patiently (and not so patiently), I'm happy to say that we're very near completion on some changes/enhancements/improvements that will allow us to finally go live with the INETA Regional Speakers Bureau. I know quite a few of you have already registered, which is great (though some of you may need to come back and update your info), and we've had a few folks submit requests, mostly in a test capacity, but soon we'll be up and live. Here's how it breaks down. Be sure to read this, because things have changed a bit from when we initially announced it.

    1. The majority of our speaker/event funding is going into the Regional Speakers Bureau. The National Bureau still exists, but it's a good bit smaller than it was before, and it's not an "every group" benefit anymore. We'll be using the National Bureau as more of a strategic task force, targeting high-impact events and areas that need some community building love from INETA. These will be identified and handled on a case-by-case basis, and may include more than just user group events.

    2. You're going to get more events per group, per year than you did before. Not only are we focusing more resources on this program, but we're also making a lot of efforts to use it more effectively. With the INETA Regional Speakers Bureau, you should be able to get 2-3 INETA speakers per year, on average. Not every geographical area will have exactly the same experience, but we're doing the best we can.

    3. It's not a farm team program for the National Bureau. Unsurprisingly, I managed to offend a number of people when I previously made the comment that the Regional Speakers Bureau program was a farm team or stepping stone to the National Bureau. It was a poor choice of words. Anyone can participate in the Regional Speakers Bureau, and I look forward to working with all of you.

    4. There is assistance for your efforts. The exact final details are still being hammered out, but expect it to look something like this (all distances listed are based on a round trip):

        Under 120 miles = $0
        121 - 240 miles = $50 (effectively 1 to 2 hours each way)
        241 - 360 miles = $100 (effectively 2 to 3 hours each way)
        361 - 480 miles = $200 (effectively 3 to 4 hours each way)

    For those of you who travel a lot, we're working on a solution to handle group visits when you're away from home. These will (for now) be handled on a case-by-case basis.

    5. We're going to make it as easy as possible to work with the program. In order to do this, we need a few things from you. For speakers, that means your home address. It also means (maybe) filling out a simple one-line expense report via the INETA website. For user groups, it means making sure your meeting address is up to date as well.

    6. Distances will be automatically calculated from your home of record to the user group event and back. We realize that this is not a perfect solution for every instance, but we're not paying you to speak at an event, and you won't be taxed on this money. It's simply some assistance to make your community efforts easier. Our way of saying thanks for everything you do.

    7. Sounds good so far; what's the catch? There's always a catch, right? In this case there are two of them:

    1) At this time, Microsoft employees are welcome to use the website to line up speaking engagements with user groups, but are not eligible for financial assistance.

    2) Anyone can register and use the website to line up speaking engagements with user groups; however, you must receive and maintain a net score of 3+ positive ratings (we're implementing a thumbs up / thumbs down system) in order to receive financial assistance. These ratings are provided by the user group leaders after the meeting has taken place.

    8. Involvement by the user group leaders is a key factor in the success of this program. Your job isn't done once you request a speaker. After you've had your meeting, it's critical that you go back to the website and take a very small survey. Doing this ensures that the speaker gets rated (and compensated if eligible) and also ensures that you can make another request, since you won't be able to make a new request if you have an old one outstanding.

    9. What about Canada? We're definitely working on that. Unfortunately there's nothing new to report on that front, other than to say that we're trying.

    So... this is where things stand currently. We're working very quickly to get this in place and get speakers and groups together. If you have any questions, please leave a comment below and I'll answer them as quickly as possible. If I've forgotten anything, or if things change, I'll update it here.

    Thanks,
    Chris G. Williams
    INETA Board of Directors

  • USB drives not recognized all of a sudden (module usb_storage not loading)

    - by Siddharth
    I am very close to the solution; I just need to know how to get usb-storage to load. I have tried most of the advice on Ask Ubuntu and other sites, from enabling usb_storage to fdisk -l, but I am unable to find steps to get it working again.

    sudo lsusb results:

        Bus ... (4 lines skipped)
        Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse
        Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard
        Bus 001 Device 005: ID 8564:1000

    sudo dmesg | tail reports:

        [ 69.567948] usb 1-4: USB disconnect, device number 4
        [ 74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd
        [ 74.240484] Initializing USB Mass Storage driver...
        [ 74.256033] scsi5 : usb-storage 1-6:1.0
        [ 74.256145] usbcore: registered new interface driver usb-storage
        [ 74.256147] USB Mass Storage support registered.
        [ 74.257290] usbcore: deregistering interface driver usb-storage

    fdisk -l reports:

        Device Boot Start End Blocks Id System
        /dev/sda1 * 2048 972656639 486327296 83 Linux
        /dev/sda2 972658686 976771071 2056193 5 Extended
        /dev/sda5 972658688 976771071 2056192 82 Linux swap / Solaris

    I think I need steps to install and get the usb_storage module working.

    Edit: I tried sudo modprobe -v usb-storage, which reports:

        insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko

    Edit:

        jsiddharth@siddharth-desktop:~$ sudo udevadm monitor --udev
        monitor will print the received events for:
        UDEV - the event which udev sends out after rule processing
        UDEV [4757.144372] add /module/usb_storage (module)
        UDEV [4757.146558] remove /module/usb_storage (module)
        UDEV [4757.148707] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6 (usb)
        UDEV [4757.149699] add /bus/usb/drivers/usb-storage (drivers)
        UDEV [4757.151214] remove /bus/usb/drivers/usb-storage (drivers)
        UDEV [4757.156873] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0 (usb)
        UDEV [4757.160903] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi)
        UDEV [4757.164672] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host)
        UDEV [4757.165163] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host)
        UDEV [4757.165440] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi)

    Narrowing down more: it seems I need usb_storage to load as a module.

        jsiddharth@siddharth-desktop:~$ lsmod | grep usb
        usbserial 37201 0
        usbhid 41937 0
        hid 77428 1 usbhid

    Still no USB storage driver loaded, nor does a device show up in /dev. Any step-by-step process to debug and fix this would be really helpful.
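
    The dmesg line "usbcore: deregistering interface driver usb-storage" right after registration, and the matching add/remove pairs in the udevadm output, suggest something is unloading the module as soon as it loads. A hedged checklist, not a guaranteed fix:

        # is usb-storage blacklisted or wrapped by an install rule?
        grep -r "usb.storage" /etc/modprobe.d/
        # load it by hand and watch whether it sticks
        sudo modprobe usb-storage
        lsmod | grep usb_storage
        dmesg | tail -n 20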

  • Puppet - Possible to use software design patterns in modules?

    - by Mike Purcell
    As I work with Puppet, I find myself wanting to automate more complex setups, for example vhosts for X number of websites. As my Puppet manifests get more complex, I find it difficult to apply the DRY (don't repeat yourself) principle. Below is a simplified snippet of what I am after, but it doesn't work, because Puppet throws various errors depending on whether I use classes or defines. I'd like to get some feedback from some seasoned puppetmasters on how they might approach this solution.

        # site.pp
        import 'nodes'

        # nodes.pp
        node nodes_dev { $service_env = 'dev' }
        node nodes_prod { $service_env = 'prod' }
        import 'nodes/dev'
        import 'nodes/prod'

        # nodes/dev.pp
        node 'service1.ownij.lan' inherits nodes_dev {
            httpd::vhost::package::site { 'foo': }
            httpd::vhost::package::site { 'bar': }
        }

        # modules/vhost/package.pp
        class httpd::vhost::package {
            class manage($port) {
                # More complex stuff goes here, like ensuring that conf paths and URIs
                # exist, as well as log files, which is why I want to do the work once
                # and use it many times
                notify { $service_env: }
                notify { $port: }
            }
            define site {
                case $name {
                    'foo': {
                        class { 'httpd::vhost::package::manage': port => 20000 }
                    }
                    'bar': {
                        class { 'httpd::vhost::package::manage': port => 20001 }
                    }
                }
            }
        }

    That code snippet gives me a Duplicate declaration: Class[Httpd::Vhost::Package::Manage] error, and if I switch the manage class to a define and attempt to access a global or pass in a variable common to both foo and bar, I get a Duplicate declaration: Notify[dev] error. Any suggestions on how I can implement the DRY principle and still get Puppet to work?

    -- UPDATE --

    I'm still having a problem trying to ensure that some of my vhosts, which may share a parent directory, are set up correctly. Something like this:

        node 'service1.ownij.lan' inherits nodes_dev {
            httpd::vhost::package::site { 'foo_sitea': }
            httpd::vhost::package::site { 'foo_siteb': }
            httpd::vhost::package::site { 'bar': }
        }

    What I need to happen is that sitea and siteb get the same parent "foo" folder. The problem I am having is when I call a define to ensure the "foo" folder exists. Below is the site define as I have it; hopefully it makes clear what I am trying to accomplish.

        class httpd::vhost::package {
            File { owner => root, group => root, mode => 0660 }
            define site() {
                $app_parts = split($name, '[_]')
                $app_primary = $app_parts[0]
                if ($app_parts[1] == '') {
                    $tpl_path_partial_app = "${app_primary}"
                    $app_sub = ''
                } else {
                    $tpl_path_partial_app = "${app_primary}/${app_parts[1]}"
                    $app_sub = $app_parts[1]
                }
                include httpd::vhost::log::base
                httpd::vhost::log::app { $name:
                    app_primary => $app_primary,
                    app_sub     => $app_sub
                }
            }
        }

        class httpd::vhost::log {
            class base {
                $paths = [
                    '/tmp',
                    '/tmp/var',
                    '/tmp/var/log',
                    '/tmp/var/log/httpd',
                    "/tmp/var/log/httpd/${service_env}"
                ]
                file { $paths: ensure => directory }
            }
            define app($app_primary, $app_sub) {
                $paths = [
                    "/tmp/var/log/httpd/${service_env}/${app_primary}",
                    "/tmp/var/log/httpd/${service_env}/${app_primary}/${app_sub}"
                ]
                file { $paths: ensure => directory }
            }
        }

    The include httpd::vhost::log::base works fine, because it is "included", which means it is only implemented once even though site is called multiple times. The error I am getting is: Duplicate declaration: File[/tmp/var/log/httpd/dev/foo]. I looked into using exec, but I'm not sure this is the correct route; surely others have had to deal with this before, and any insight is appreciated, as I have been grappling with this for a few weeks. Thanks.
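
    One common (if somewhat inelegant) way around the Duplicate declaration on the shared parent folder is to guard the resource with defined(). A hedged sketch of the app define rewritten that way; note that defined() is parse-order dependent in older Puppet releases, so treat this as a workaround rather than a pattern:

        define httpd::vhost::log::app($app_primary, $app_sub) {
            $parent = "/tmp/var/log/httpd/${service_env}/${app_primary}"
            # declare the shared parent only if no earlier site instance already did
            if !defined(File[$parent]) {
                file { $parent: ensure => directory }
            }
            file { "${parent}/${app_sub}": ensure => directory }
        }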

  • New security configuration flag in UCM PS3

    - by kyle.hatlestad
    While the recent Patch Set 3 (PS3) release was mostly focused on bug fixes and such, a new configuration flag was added for security. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders: users could assign specific security rights on each and every document and folder within a project. It was possible to enable these ACLs without having the Collaboration Manager component enabled, but it took some special instructions (see technote #603148.1) and added some extraneous pieces still related to Collaboration Manager.

    When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they've been cleaned up a bit, and a new configuration flag has been added to simply turn on the ACL fields and none of the other collaboration bits. To enable ACLs:

        UseEntitySecurity=true

    Along with this configuration flag, you also need to define which Security Groups will honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored.

        SpecialAuthGroups=HumanResources,Legal,Marketing

    Save the settings and restart the instance. Upon restart, two new metadata fields will be created: xClbraUserList and xClbraAliasList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

    On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to plain Security Group and Account access.

    As for how they are stored in the metadata fields, each entry starts with its identifier: an ampersand (&) for users, an "at" sign (@) for groups, and a colon (:) for roles. After that comes the entity name, and at the end the level of access in parentheses, e.g. (RWDA). Entries are separated by commas. So if you were populating values through Batch Loader or an external source, the values would be defined this way.

    Detailed information on access control lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
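
    Putting the storage format just described together, a batch-loaded value would look something like this (the user and group names here are made up for illustration):

        xClbraUserList=&jsmith(RWDA),&mjones(RW)
        xClbraAliasList=@Marketing(R),@Legal(RWD)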

  • How to Create a bootable Ubuntu USB flash drive from terminal

    - by Avinash Raj
    Is it possible to create a Ubuntu USB flash drive from the terminal, without using any third-party applications like YUMI, UNetbootin, etc.? I tried to create a bootable Ubuntu flash drive with the dd method:

        sudo umount /dev/sdb
        sudo dd if=/path/to/ubuntu.iso of=/dev/sdb bs=1M

    It creates files on the USB disk, but when I try to boot from it, it fails with an Operating System Not Found error. Any help will be appreciated!
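
    One hedged possibility for the "Operating System Not Found" error: dd only produces a bootable stick if the ISO is a hybrid image, and older Ubuntu ISOs were not. isohybrid from the syslinux tools can convert one; the package name (syslinux or syslinux-utils) varies by release, so treat this as a sketch:

        sudo apt-get install syslinux      # or syslinux-utils on newer releases
        isohybrid /path/to/ubuntu.iso      # add an MBR so BIOSes can boot the raw image
        sudo umount /dev/sdb1              # unmount the mounted partition, not the bare disk
        sudo dd if=/path/to/ubuntu.iso of=/dev/sdb bs=1M && sync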

  • Keeping up with New Releases

    - by Jeremy Smyth
    You can keep up with the latest developments in MySQL software in a number of ways, including various blogs and other channels. However, for the most correct (if somewhat dry and factual) information, you can go directly to the source.

    Major Releases

    For every major release, the MySQL docs team creates and maintains a "nutshell" page containing the significant changes in that release. For the current GA release (whatever that is), you'll find it at this location:

        https://dev.mysql.com/doc/mysql/en/mysql-nutshell.html

    At the moment, this redirects to the summary notes for MySQL 5.6. The notes for MySQL 5.7 are also available at that website, at the URL http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html, and when that version eventually goes GA, it will become the notes linked from the URL shown above.

    Incremental Releases

    For more detail on each incremental release, you can have a look at the release notes for each revision. For MySQL 5.6, the release notes are stored at the following location:

        http://dev.mysql.com/doc/relnotes/mysql/5.6/en/

    At the time I write this, the topmost entry is a link for MySQL 5.6.15. Each linked page shows the changes in that particular version, so if you are currently running 5.6.11 and are interested in what bugs were fixed in versions since then, you can look at each subsequent release and see all changes in glorious detail.

    One really clever thing you can do with that site is an advanced Google search to find exactly when a feature was released, along with its release notes. By using the preceding link in a "site:" directive, you can search only within those pages for an entry. For example, the following Google search shows pages within the release notes that reference the --slow-start-timeout option:

        site:http://dev.mysql.com/doc/relnotes/mysql/ "--slow-start-timeout"

    By running that search, you can see that the option was added in MySQL 5.6.5 and also rolled into MySQL 5.5.20.

    White Papers

    Also, with each major release you can usually find a white paper describing what's new in that release. For MySQL 5.6 there is a "What's New" white paper at this location:

        http://www.mysql.com/why-mysql/white-papers/whats-new-mysql-5-6/

    You'll find other white papers at:

        http://www.mysql.com/why-mysql/white-papers/

    Search the page for "5.6" to see any papers dealing specifically with that version.

  • Algorithm to optimize grouping

    - by Jeroen
    I would like to know if there's a known algorithm or best-practice way to do the following. I have a collection with a subcollection, for example:

        R1 R2 R3
        -- -- --
        M  M  M
        N  N
        L  L
           A

    What I need is an algorithm that produces the following result:

        R1, R2: M N L
        R2: A
        R3: M

    This is not what I want; it has more repeating values for R than the above:

        R1, R2, R3: M
        R1, R2: N L
        R2: A

    I need to group in a way that gives me the most optimized groups of R: the fewer groups of R the better, so that I get the largest subcollections. Another example (with a more obvious result):

        R1 R2 R3
        -- -- --
        M  M  A
        V  V  B
        L  L  C

    should result in:

        R1, R2: M V L
        R3: A B C

    I need to do this in LINQ/C#. Any solutions? Tips? Links?
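
    A hedged LINQ starting point (the Col class shape here is an assumption): key each item by the exact set of R's containing it, then group items sharing that key. This reproduces the second example exactly; for the first example it yields the "unwanted" grouping, and squeezing out further repetition of R labels is a set-cover-style optimization that needs a greedy or exhaustive pass on top of this baseline.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Col { public string Name; public List<string> Items; }

        class Demo
        {
            static void Main()
            {
                var cols = new List<Col> {
                    new Col { Name = "R1", Items = new List<string> { "M", "V", "L" } },
                    new Col { Name = "R2", Items = new List<string> { "M", "V", "L" } },
                    new Col { Name = "R3", Items = new List<string> { "A", "B", "C" } },
                };

                // key each item by the sorted set of parent R's that contain it,
                // then group together items that share the same parent set
                var groups = cols
                    .SelectMany(c => c.Items, (c, item) => new { c.Name, Item = item })
                    .GroupBy(p => p.Item)
                    .Select(g => new {
                        Item = g.Key,
                        Key = string.Join(", ", g.Select(p => p.Name).OrderBy(n => n).ToArray())
                    })
                    .GroupBy(x => x.Key, x => x.Item);

                foreach (var g in groups)
                    Console.WriteLine(g.Key + ": " + string.Join(" ", g.ToArray()));
                // prints: "R1, R2: M V L" and "R3: A B C"
            }
        }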

    Read the article

  • Mounting a drive in Ubuntu 9.10 (Karmic Koala)

    - by morpheous
    I have just installed Ubuntu on a machine that previously had XP installed on it. The machine has 2 HDDs (hard disk drives). I opted to install Ubuntu completely over XP. I am new to Linux, and I am still learning how to navigate the file structure. However, AFAICT, there is only one drive. I want to be able to store programs etc. on the first drive, and store data (program output etc.) on the second drive. It appears Ubuntu is not aware that I have 2 drives (on XP, these were drives C and D). How can I mount the second drive? Ideally, I want to do this automatically on login, so that the drive is available to me whenever I log in, without manual intervention from me. In XP, I could refer to files on a specific drive by prefixing with the drive letter (e.g. c:\foobar.cpp and d:\foobar.dat). I suspect the notation on Ubuntu is different. How may I refer to specific files on different drives? Last but not the least (a bit unrelated to the previous questions), this relates to directory structure again. I am a developer (C++ for desktops and PHP for websites), and I want to install the following apps/libraries:

        i).    Apache 2.2
        ii).   PHP 5.2.11
        iii).  MySQL (5.1)
        iv).   SVN
        v).    Netbeans
        vi).   C++ development tools (gcc, gdb, emacs etc)
        vii).  QT toolkit
        viii). Some miscellaneous scientific software (e.g. www.r-project.org, www.gnu.org/software/octave/)

    I would be grateful if someone could recommend a directory layout for these applications. Regarding development, I would also be grateful if someone could point out where to store my project and source files, i.e.:

        (i)  *.cpp, *.hpp, *.mak files for C++ projects
        (ii) individual websites

    On my XP machine the layout for C++ dev was like this:

        c:\dev\devtools (common libs and headers etc)
        c:\dev\workarea (root folder for projects)
        c:\dev\workarea\c++ (c++ projects)
        c:\dev\workarea\websites (web projects)

    I would like to create a similar folder structure on the Linux machine, but it's not clear whether to place these folders under /, /usr, /home or somewhere else (there seems to be a baffling number of choices, so I want to get it "right" first time, i.e. have a directory structure that most developers use, so it is easier when communicating with other Ubuntu/Linux developers).
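
    A minimal sketch of the usual approach, assuming the second drive shows up as /dev/sdb with a single partition /dev/sdb1; the device name and the /data mount point are placeholders, so check with fdisk first:

        # List disks and partitions to find the second drive.
        sudo fdisk -l

        # Create a mount point and mount the partition once, by hand.
        sudo mkdir -p /data
        sudo mount /dev/sdb1 /data

        # To mount it automatically at every boot, add a line like this to
        # /etc/fstab (filesystem type assumed ext3/ext4; adjust to match):
        #   /dev/sdb1  /data  ext4  defaults  0  2

    There are no drive letters on Linux: everything hangs off one tree, so after mounting, a file on the second drive is simply /data/foobar.dat. For project and source files, a common choice is a tree under your home directory, e.g. /home/you/dev/workarea/cpp and /home/you/dev/workarea/websites, since /home is where per-user data belongs; packaged software installed through apt places itself under /usr automatically.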

    Read the article

  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning I was alerted to the fact that both apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said apache was not running. So, I started apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding.

        /var/log/unattended-upgrades/unattended-upgrades.log
        2013-07-02 06:30:51,875 INFO Starting unattended upgrades script
        2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security']
        2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor apport apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils gnupg gpgv isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdns81 libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libexpat1 libfreetype6 libgc1c2 libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27 libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8 libx11-6 libx11-data libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual linux-libc-dev linux-virtual multiarch-support openssl perl perl-base perl-modules python-apport python-crypto python-keyring python-problem-report python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata update-manager-core
        2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log'
        2013-07-02 06:36:10,584 INFO All upgrades installed

    I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows:

        /etc/apt/apt.conf.d/50unattended-upgrades
        // Automatically upgrade packages from these (origin:archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}-security";
        //  "${distro_id}:${distro_codename}-updates";
        //  "${distro_id}:${distro_codename}-proposed";
        //  "${distro_id}:${distro_codename}-backports";
        };
        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        };

        /etc/apt/apt.conf.d/20auto-upgrades
        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";

    I've struggled to find documentation about what happens to running processes during an upgrade.

    - Is this expected behaviour? Or should unattended-upgrades restart apache after upgrading it?
    - What can I do to ensure apache is restarted correctly? Should I just blacklist the apache package?
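
    For the last question, one option is to add apache2 to the blacklist stanza shown above, so unattended-upgrades skips it and you upgrade apache by hand in a maintenance window. A sketch under the assumption that apt.conf list stanzas merge additively (editing the existing empty Package-Blacklist block in place is tidier than appending a second one):

        # Append an apache2 blacklist stanza to the unattended-upgrades
        # config; assumes the Ubuntu 12.04 file layout from the question.
        printf '%s\n' \
          'Unattended-Upgrade::Package-Blacklist {' \
          '    "apache2";' \
          '};' | sudo tee -a /etc/apt/apt.conf.d/50unattended-upgrades

    Note the trade-off: blacklisting keeps apache up but also stops its security patches, so a monitoring check or a scheduled restart after upgrade runs may be the safer long-term fix.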

    Read the article

  • execute script with sudo after login

    - by Kalamalka Kid
    I need to execute the following commands AFTER login:

        sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945
        sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40

    Editing rc.local does not work, nor does using hdparm.conf, because as soon as I log in the disks start up again. I have tried numerous things like bash files and autossh entries in the startup applications, with no luck, because sudo is involved.
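
    One common workaround is to grant your user passwordless sudo for hdparm only, then run a small script from Startup Applications. A sketch, assuming hdparm lives at /sbin/hdparm (as on stock Ubuntu) and with "youruser" as a placeholder; edit sudoers only via sudo visudo:

        #!/bin/bash
        # Save as ~/spin-down.sh, chmod +x it, and add it as a Startup
        # Applications entry so it runs after each login.
        # Requires a sudoers rule, added with `sudo visudo`, such as:
        #   youruser ALL=(root) NOPASSWD: /sbin/hdparm
        sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945
        sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40

    Because the NOPASSWD rule is scoped to /sbin/hdparm alone, sudo still prompts for a password for everything else as usual.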

    Read the article

  • Is Unix/Linux adoption a measure of developer skill? [closed]

    - by adeptsage
    Is the adoption of a Linux/Unix-based OS for development (or recreation) a rough measure of a developer's awareness, adaptability, and/or skill, considering factors like the learning curve for effective code testing via the CLI, basic scripting, tweaking, etc.? I have no intention of starting a flame war between Windows and Linux as dev environments, or over personal choice of OS for development. What I do wish to know is: is the level of adaptability of a person who uses a Unix-based system defined by the fact that they program on a Unix-based OS?

    Read the article

  • Create QR Codes with Zxing and Java 2D in 5 minutes, by Thierry Leriche-Dessirier

    Hello, I'd like to share an article entitled "Create QR Codes with Zxing and Java 2D, in 5 minutes". This short article looks at generating QR codes in Java. We will see that it is relatively simple to create a matrix of modules with Zxing and then turn it into a nice image with Java 2D. The article can be read here: http://thierry-leriche-dessirier.dev...-java2d-5-min/ I also invite you to read my other articles here: http://thierry-leriche-dessirier.dev...#page_articles ...

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >