Search Results

Search found 317 results on 13 pages for 'irc'.


  • Can anyone explain why the 1st example gets different results than the following 2?

    - by klumsy
    $b = (2,3) $myarray1 = @(,$b,$b) $myarray1[0].length #this will be 1 $myarray1[1].length $myarray2 = @( ,$b ,$b ) $myarray2[0].length #this will be 2 $myarray2[1].length $myarray3 = @(,$b ,$b ) $myarray3[0].length #this will be 2 $myarray3[1].length UPDATE: I think on #powershell IRC we have worked it out. Here is another example that demonstrates the danger of breaking with the comma on the following line, rather than at the end of the previous line, when listing multiple items in an array over multiple lines. $b = (1..20) $a = @( $b, $b ,$b, $b, $b ,$b) for($i=0;$i -lt $a.length;$i++) { $a[$i].length } "--------" $a = @( $b, $b ,$b ,$b, $b ,$b) for($i=0;$i -lt $a.length;$i++) { $a[$i].length } produces 20 20 20 20 20 20 -------- 20 20 20 1 20 20 I'm curious how people will explain this. I think I understand it now, but would have trouble explaining it in a concise, understandable fashion, though the above example goes somewhat towards that goal.

    Read the article

  • "import as" leads to unresolved import error, "from .. import" does not

    - by Markus R.
    I'm trying to write some plugins for the IRC bot supybot with Eclipse/PyDev. PyDev gives me errors about unresolved imports on supybot modules/packages (e.g. import supybot.utils as utils), but works fine on e.g. "from supybot.commands import *". So I guess I set up PyDev correctly, as it finds the wanted modules. The problem must be in PyDev/Eclipse, as the bot works correctly, and in eric5 I also get no errors about that. Removing the interpreter and setting it up again didn't help. Any other ideas on how to fix this? System: Arch Linux, Eclipse Juno, PyDev 2.7.1, the wanted (and set up) Python interpreter is 2.7, and supybot is installed in site-packages for Python 2.7. Edit: Just noticed that PyDev doesn't mark the "from ... import *" as an error, but if I use a function imported from there I get an error on that function.

    Read the article

  • How might one detect the first run of a program?

    - by Julian H. Lam
    In my web application, users can download a .tar.gz archive containing the app files. However, because the MySQL database won't have been configured then, the user needs to run the install script located in ./install. I "catch" the user upon first run of the app by checking to see if the ./install directory exists. If so, the index.php page redirects the user to the install script. However, I was wondering if there were a more elegant way to "catch" a user upon first-run of a program. Someone on IRC suggested the web-server create a file .installed upon completion, but because the web server might not have write privileges to the web root directory, I can't rely on that. How would you go about solving this problem, or is my solution workable?

    Read the article

  • In Scala 2.8 collections, why was the Traversable type added above Iterable?

    - by Seth Tisue
    I know that to be Traversable, you need only have a foreach method. Iterable requires an iterator method. Both the Scala 2.8 collections SID and the "Fighting Bitrot with Types" paper are basically silent on the subject of why Traversable was added. The SID only says "David McIver... proposed Traversable as a generalization of Iterable." I have vaguely gathered from discussions on IRC that it has to do with reclaiming resources when traversal of a collection terminates? The following is probably related to my question. There are some odd-looking function definitions in TraversableLike.scala, for example:

    def isEmpty: Boolean = {
      var result = true
      breakable {
        for (x <- this) {
          result = false
          break
        }
      }
      result
    }

    I assume there's a good reason that wasn't just written as:

    def isEmpty: Boolean = {
      for (x <- this) return false
      true
    }

    Read the article

  • Why operator= returns a reference, not a const reference

    - by outmind
    The original question is related to overloading operator=, and I'd like to share my findings, as they were nontrivial for me to find. I cannot imagine a reasonable example of using (a=b) as an lvalue. With the help of IRC and Google I've found the following article: http://msdn.microsoft.com/en-us/magazine/cc301415.aspx It provides two examples: (a=b)=c and, given f(T& ), the call f(a=b). But neither is very convincing, as the first violates associativity and I believe it is bad practice; the second one gives me the same feeling. Could you provide better examples of why the return type should not be const?

    Read the article

  • How can I gzinflate and save the inflated data without running it? (Found what I think is a trojan o

    - by Rob
    Well, not my server. My friend found it and sent it to me, and we're trying to make sense of it. It appears to be a PHP IRC bot, but I have no idea how to decode it. Here is the code: <?eval(gzinflate(base64_decode('some base 64 code here')))?> So I decoded the base64, and it output a ton of strange characters; I'm guessing it's either encrypted or a different file type, like when you change a .jpg to a .txt and open it. But I have no idea how to decode this and determine its source. Any help?
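
    One way to inspect such a payload without executing it is to do the same base64 + inflate steps outside of PHP and just write the result to a file. A minimal sketch in Python (the file names are placeholders; PHP's gzinflate() expects raw DEFLATE data, hence the -15 window bits):

    import base64
    import zlib

    # Paste the quoted base64 blob from the PHP file into payload.b64 first
    # (placeholder file name).
    payload_b64 = open("payload.b64").read()

    raw = base64.b64decode(payload_b64)
    decoded = zlib.decompress(raw, -15)   # -15 = raw DEFLATE, matching PHP's gzinflate()

    # Save the inflated source for inspection instead of eval()ing it.
    # Malicious payloads are often nested, so the output may itself contain
    # another eval(gzinflate(base64_decode(...))) layer; repeat if so.
    with open("decoded.php.txt", "wb") as out:
        out.write(decoded)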

    Read the article

  • How to make socket.recv(500) not stop a while loop.

    - by ImTooStupidForThis
    I made an IRC bot which uses a while-true loop to receive whatever is said. To receive I use recv(500), but that blocks the loop if there isn't anything to receive, and I need the loop to continue even when nothing arrives, because I use it as a makeshift timer. Example code:

    # A lot of stuff
    timer = 0
    while 1:
        timer = timer + 1
        line = s.recv(500)  # If there is nothing to receive, the loop (and thus the timer) stops.
    # A lot of stuff

    So either I need a way to stop it stopping the loop, or I need a better timer.
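
    A minimal sketch of one common fix is to put a timeout on the socket, so recv() gives up after a short interval instead of blocking forever (the host, port, and 1-second interval below are placeholder assumptions, not details from the question):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("irc.example.net", 6667))   # placeholder server and port
    s.settimeout(1.0)                      # recv() now raises socket.timeout after ~1 second

    timer = 0
    while 1:
        timer = timer + 1
        try:
            line = s.recv(500)
        except socket.timeout:
            line = ""                      # nothing arrived this tick; keep looping so the timer advances
        if line:
            pass                           # handle whatever was received here

    A non-blocking socket, or select.select() with a timeout, achieves the same effect if finer control over the waiting is needed.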

    Read the article

  • A big flat text file or a HTML site for language documentation?

    - by Bad Sector
    A project of mine is a small embeddable Tcl-like scripting language, LIL. While I'm mostly making it for my own use, I think it is interesting enough for others to use, so I want it to have nice (but not very "wordy") documentation. So far I'm using a single flat readme.txt file. It explains the language's syntax, features, standard functions, how to use the C API, etc. Also, it is easy to scan and read in almost every environment out there, from basic text-only terminals to full-fledged high-end graphical desktop environments.

    However, while I tried to keep things nicely formatted (as much as this is possible in plain text), I still think that, being a big (and growing) wall of text, it isn't as easy on the eyes as it could be. Also I feel that sometimes I'm not writing as much as I want in order to avoid expanding the text too much.

    So I thought I could use another project of mine, QuHelp, which is basically a help site generator for sites like this one, with a sidebar that provides a tree of topics/subtopics and offline full-text search. With this I can use HTML to format the documentation, and if I use QuHelp for some other project that uses LIL, I can import LIL's documentation as part of the other project's documentation. However, converting the existing documentation to QuHelp/HTML isn't a small task, especially when it comes to functions (I'll need to put more detail on them than what currently exists in the readme.txt file). Also it loses the wide range of availability that it currently has (even if QuHelp's generated code degrades gracefully down to console-only web browsers, plain text is readable from everywhere, including from popular editors such as Vim and Emacs - I once had someone tell me that he likes LIL's documentation because it is readable without leaving his editor).

    So, my question is simply this: should I keep the documentation as it is now, in the form of a single readme.txt file, or should I convert it to something like the site I mentioned above? There is also the option to do both, but I'm not sure if I'll be able to always keep them in sync, or if it is worth the effort. After asking around on IRC I've got mixed answers: some liked the wide availability of the single text file, others said that it looks as bad as a man page (personally I don't mind that - I can read man pages just fine - but other people might have issues reading them). What do you think?

    Read the article

  • What does the `dmesg` error: "composite sync not supported" mean?

    - by M. Tibbits
    Question: I see "[ 20.473125] composite sync not supported" and several such entries when I run dmesg. What do they mean?

    Background: I'm trying to debug a problem where my laptop won't suspend. Since acpi seems happy and I can suspend easily from the command line, I've turned to tracking down all boot-up errors/warnings. So I run dmesg | grep not and, amongst other stuff, I get:

    728:[ 17.267120] composite sync not supported
    733:[ 18.009061] composite sync not supported
    740:[ 18.159289] registered panic notifier
    749:[ 18.162500] vga16fb: not registering due to another framebuffer present
    757:[ 18.598251] composite sync not supported
    776:[ 20.473125] composite sync not supported
    777:[ 20.932266] composite sync not supported
    778:[ 28.350231] composite sync not supported
    779:[ 28.924913] composite sync not supported
    780:[ 35.480658] composite sync not supported

    The full log for the few lines right around that first appearance (line 728) is listed at the bottom of my post (I'd happily include anything else). Any ideas what could be causing this? I've read several sites: Ubuntuforums #1, IRC Chat #1. One post talks about Adobe Flash causing this error? Some others also suggest that it might be an nvidia-related problem, but I've got a Dell Latitude D630 with integrated Intel graphics -- so nvidia isn't the problem.

    [ 17.207142] phy0: Selected rate control algorithm 'minstrel'
    [ 17.207833] Registered led device: b43-phy0::tx
    [ 17.207849] Registered led device: b43-phy0::rx
    [ 17.207865] Registered led device: b43-phy0::radio
    [ 17.207927] Broadcom 43xx driver loaded [ Features: PL, Firmware-ID: FW13 ]
    [ 17.267120] composite sync not supported
    [ 17.415795] EXT4-fs (sda2): mounted filesystem with ordered data mode
    [ 17.602131] [drm] initialized overlay support
    [ 17.620201] input: DualPoint Stick as /devices/platform/i8042/serio1/input/input7
    [ 17.641192] input: AlpsPS/2 ALPS DualPoint TouchPad as /devices/platform/i8042/serio1/input/input8
    [ 18.009061] composite sync not supported
    [ 18.106042] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3af: clean.
    [ 18.108115] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x3e0-0x4ff: clean.
    [ 18.108941] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x820-0x8ff: clean.
    [ 18.109676] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcf7: clean.
    [ 18.110356] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff: clean.
    [ 18.159286] fb0: inteldrmfb frame buffer device
    [ 18.159289] registered panic notifier
    [ 18.160218] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:01/input/input9
    [ 18.160286] ACPI: Video Device [VID1] (multi-head: yes rom: no post: no)
    [ 18.160334] ACPI Warning for \_SB_.PCI0.VID2._DOD: Return Package has no elements (empty) (20090903/nspredef-433)
    [ 18.160432] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:02/input/input10
    [ 18.160491] ACPI: Video Device [VID2] (multi-head: yes rom: no post: no)
    [ 18.160539] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
    [ 18.162494] vga16fb: initializing
    [ 18.162497] vga16fb: mapped to 0xc00a0000
    [ 18.162500] vga16fb: not registering due to another framebuffer present
    [ 18.176091] HDA Intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
    [ 18.176123] HDA Intel 0000:00:1b.0: setting latency timer to 64
    [ 18.285752] input: HDA Digital PCBeep as /devices/pci0000:00/0000:00:1b.0/input/input11
    [ 18.312497] input: HDA Intel Mic at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12
    [ 18.312586] input: HDA Intel HP Out at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input13
    [ 18.328043] usbcore: registered new interface driver ndiswrapper
    [ 18.460909] Console: switching to colour frame buffer device 180x56
    [ 18.598251] composite sync not supported

    Read the article

  • How do I configure sound with PulseAudio and Multiseat?

    - by Anthony
    In the spirit of full disclosure, I just posted this question to the Ubuntu forums, but I figure more heads working on it couldn't hurt. I have a multi-seat setup working quite well. Hot-plugging input devices works as expected and such. The only issue I am still not able to resolve is getting audio for each seat. Here is a summary of my attempts at getting audio to work:

    - Make ~/.pulse/default.pa dynamically configured based on which $DISPLAY the user logs in at. See this pastebin for the details.
    - Load pulseaudio as a system-wide instance. Couldn't get this to work; none of the audio hardware was accessible to the users.
    - Use udev rules to mark seats in ConsoleKit, following the udev guidelines found here: http://www.freedesktop.org/wiki/Software/systemd/multiseat I didn't think this would work, although it was "guaranteed" to work by someone in irc.freenode #pulseaudio.

    None of those attempts yielded success, which is why I now turn to the community for help. It is quite possible that the suggested methods work and I just messed some aspect of them up. This is the last piece of the puzzle needed before I can go and update the MultiseatX page to include instructions for Ubuntu 12.04.

    My understanding of the situation: access to pulseaudio is restricted to the active session as marked by ConsoleKit (something about an ACL), and CK can only mark one session as active at a time. This simple fact leads me to believe that the solution should involve pulseaudio being run as a system-wide instance, with each user connecting to the pulse server and being limited to a subset of all the hardware. Maybe each user connects to the pulse server via localhost, I don't know. I do know that regardless of my attempts and their failed results, I was always able to use sudo aplay -D plughw:0,0 /usr/share/sounds/alsa/Front_Center.wav to play something on any of the hardware.

    I'm grasping at straws and am now down to the last few hairs I can pull out of my head. Please help me figure this out so we can share the wealth. Any additional information needed will be provided at your request.

    Read the article

  • vJUG: Worldwide Virtual JUG Created

    - by Tori Wieldt
    London Java Community leader and technical evangelist Simon Maple has created a Meetup called vJUG, with the aim of connecting Java developers in the virtual world. The aims for vJUG are:

    - Get technical leaders from around the world to present to the vJUG members (without travel cost concerns!).
    - Work with local JUGs to provide worldwide content to their members and help JUGs present to a worldwide audience.
    - Provide content to devs without access to a local JUG.
    - Be a hub that will stream content from other JUG sessions live.

    The vJUG is not intended to replace local JUG efforts. "The vJUG can never be, and will never be, as vibrant and valuable to its members as a proper local JUG can. Why? Because the true value in JUG meetings are the face to face interactions and personal networking," said Maple. "However, many people do not have access to a really active JUG with great speakers and awesome content. Or, like me, the closest JUG is about 90 mins away." WebEx and Google Hangouts are great, Maple explained, but he hopes vJUG will provide more coordination of online events.

    Maple hopes that in the future, vJUG will provide:

    - An events calendar with reminders and links to upcoming meetings.
    - A newsletter with what's coming up and links to previous sessions.
    - Coordination of links to IRC channels which are active during presentations (to create a feeling of virtual community).
    - Comments and forums around sessions and presentations.
    - A place where physical JUGs could advertise their sessions (i.e. a NY JUG event) to a worldwide audience, when streamed, via an event that people can sign up to.
    - A common WebEx or Hangout.

    Maple encourages both people who need a JUG and existing JUG members to join vJUG. "I'm looking forward to talking with many of you to get members, speakers, and JUG support!" Join vJUG now! (I sense a need for a logo...)

    Read the article

  • Connecting Adium to Google Talk with a 2-factor authentication account isn’t working

    - by Robin
    Anyone else having this problem? After turning on 2-factor authentication on my Google Account I stopped being able to log in through Adium (the Mac IM client that uses Pidgin’s libpurple for IM). Obviously you need to generate an application-specific password, but these won’t let me log in. Application-specific passwords work with other applications (e.g. Reeder for feeds and calendaring on my phone). Google specifically mentions Adium in their examples of setting up an application password for Google Talk, so I doubt it’s a generic Adium problem. I can still access Google Talk for this account if I use a talk widget on a Google website (Plus or iGoogle, for example). My bug report to Adium, including a connection log file, is up on their Trac: http://trac.adium.im/ticket/15310 . No activity there though. I also asked around in their IRC channel, but no one else could replicate the problem. If I had to guess, I’d think it was a consequence of me not having a Gmail account associated with my Google account. I don’t see exactly why that would cause it, but it seems like a fairly unusual setup that might not have been tested for.

    Read the article

  • Configure Postfix to send/relay emails to Gmail (smtp.gmail.com) via port 587

    - by tom smith
    Hi. I'm using CentOS 5.4 with Postfix. I can do a

    mail [email protected]
    subject: blah
    test
    .
    Cc:

    and the message gets sent to Gmail, but it lands in the spam folder, which is to be expected. My goal is to be able to generate email messages and have them appear in the regular Inbox! As I understand Postfix/Gmail, it's possible to configure Postfix to send/relay mail as an authenticated/valid user using port 587, so the mail would no longer be seen as spam. I've tried a number of parameters based on different sites/articles from the 'net, with no luck; some of the articles actually seem to conflict with other articles! I've also looked over the Stack Overflow postings on this, but I'm still missing something... I've also talked to a few people on IRC (CentOS/Postfix) and still have questions. So, I'm turning to Server Fault once again! If there's someone who's managed to accomplish this, would you mind posting your main.cf, sasl-passwd, and any other conf files that you use to get this working? If I can review your config files, I can hopefully see where I've screwed up and figure out how to correct the issue. Thanks for reading this, and for any help/pointers you provide! PS: If there is a Stack Overflow posting that speaks to this that I may have missed, feel free to point it out to me! -tom
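
    For reference, a commonly cited main.cf fragment for relaying through smtp.gmail.com on port 587 looks roughly like the sketch below. The password-map path and CA-bundle location are assumptions, not settings taken from this question, and the matching sasl_passwd file must be hashed with postmap before Postfix will use it.

    # main.cf additions (sketch; paths are assumptions)
    relayhost = [smtp.gmail.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt
    smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt

    # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd && postfix reload)
    [smtp.gmail.com]:587    username@gmail.com:password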

    Read the article

  • Server 2008 R2 - Unable to install any printer drivers

    - by toffitomek
    My problem is exactly the same as described here: "Server 2008 R2 - Unable to install any printer drivers" on Google Groups. I have a few Windows 2008 R2 (no SP1) servers in remote offices, mostly domain controllers, and many of them have problems installing ANY printer drivers. The following errors show up in the Event Log when adding a printer driver under Print and Document Services/Print Management/Print Services/ /Drivers, or when trying any other way to install drivers:

    EventID 215: Installing printer driver - failed, error code 0x57, HRESULT 0x80070057. See the event user data for context information.
    EventID 215: Installing printer driver Canon iR C2380/2550 PCL6 failed, error code 0x0, HRESULT 0x80070057. See the event user data for context information.
    EventID 215: Installing printer driver Canon iR C2380/2550 PCL6 failed, error code 0x490, HRESULT 0x80070057. See the event user data for context information.

    In this particular server's case the problem is with a Canon iRC 2380i printer and the Canon Generic PCL6 driver, but it seems to apply to any driver and any printer (I've tried different drivers, different versions, PCL, PostScript, etc.). I'm using 64-bit drivers that should work on this platform. Any help will be appreciated.

    Read the article

  • SSH dynamic port forwarding, "Connection refused"

    - by crodjer
    I am trying to do dynamic port forwarding using OpenSSH through a remote computer with this command:

    ssh -D 6789 rohan@<remote_ip> -p <remote_port>

    As I understand it, this should set up a SOCKS server on my machine. I am able to use it for normal browsing, but I can't connect to IRC or to a remote ssh host (through proxychains). I get this error:

    channel 3: open failed: connect failed: Connection refused

    Output of the error at a higher verbosity level:

    $ debug1: Connection to port 6789 forwarding to socks port 0 requested.
    debug2: fd 9 setting TCP_NODELAY
    debug2: fd 9 setting O_NONBLOCK
    debug3: fd 9 is O_NONBLOCK
    debug1: channel 3: new [dynamic-tcpip]
    debug2: channel 3: pre_dynamic: have 0
    debug2: channel 3: pre_dynamic: have 4
    debug2: channel 3: decode socks5
    debug2: channel 3: socks5 auth done
    debug2: channel 3: pre_dynamic: need more
    debug2: channel 3: pre_dynamic: have 0
    debug2: channel 3: pre_dynamic: have 10
    debug2: channel 3: decode socks5
    debug2: channel 3: socks5 post auth
    debug2: channel 3: dynamic request: socks5 host 4.2.2.2 port 53 command 1
    debug3: Wrote 96 bytes for a total of 3335
    channel 3: open failed: connect failed: Connection refused
    debug2: channel 3: zombie
    debug2: channel 3: garbage collecting
    debug1: channel 3: free: direct-tcpip: listening port 6789 for 4.2.2.2 port 53, connect from 127.0.0.1 port 33694, nchannels 4
    debug3: channel 3: status: The following connections are open: #2 client-session (t4 r0 i0/0 o0/0 fd 6/7 cfd -1)
    debug3: channel 3: close_fds r 9 w 9 e -1 c -1

    I googled for this too, but couldn't find any solutions.

    Read the article

  • Stream video file in debian?

    - by Rob
    I've tried ffserver with ffmpeg, I've tried VLC, and I'm not sure what else to try or what I've done wrong. I've gone through it with VLC:

    +-[ robert@s10 ]--[ ~ ]
    +[#!]¬ vlc --version
    VLC media player 2.0.0 Twoflower (revision 2.0.0-0-g421a4fc)
    VLC version 2.0.0 Twoflower (2.0.0-0-g421a4fc)
    Compiled by buildd on biber.debian.org (Mar 1 2012 22:21:37)
    Compiler: gcc version 4.6.2 (Debian 4.6.2-14)
    This program comes with NO WARRANTY, to the extent permitted by law.
    You may redistribute it under the terms of the GNU General Public License;
    see the file named COPYING for details.
    Written by the VideoLAN team; see the AUTHORS file.

    and tried everything I could in the streaming section, but I can't get the stream to actually work. Looking around, apparently Debian strips the encoders from the package? I want to share some videos I've made with friends on IRC, and it would be easiest if I could just stream them so we can all watch at the same time and critique parts in real time. Has anyone done something similar?

    Linux s10 3.2.0-2-686-pae #1 SMP Tue Mar 20 19:48:26 UTC 2012 i686 GNU/Linux

    It's a basic home network; I am behind a NAT (192.168.1.*) and have dynamic DNS set up. That doesn't really matter too much - I can figure that out - but it's not even working locally. I have a file server set up and could just share the files that way, but I'd rather have everyone watching at the same time (or just about). I'm not worried about installing new packages or building something from source; that's not a big issue, I just want to get it working. Big plus if I can do it from the command line.

    Read the article

  • Diagnosing Linux issues with iPod syncing in Ubuntu

    - by alexpotato
    Issue: I am currently using Ubuntu 9.10 with a 5th-generation iPod 60 GB black video classic. In general, it seems that Ubuntu can always detect the USB hard drive and displays it on my desktop. However, some applications detect the iPod (e.g. Rhythmbox and gtkpod do, but Banshee does not) and some don't. I narrowed the Banshee issue down to a bug that requires Nautilus to be restarted (although it would be nice not to have to do this). Also, whenever I sync with these applications, everything appears to work fine during the sync, but when I disconnect the iPod and browse it, all of the songs seem to be there but the playlists are not. If I reconnect the iPod, Banshee specifically sees the space usage as "other".

    What I am looking for is some way to at least understand what is and is not working, OR directions to somewhere that can help me learn what's going on. I have already tried:

    - IRC. Either the channel is too general (e.g. #ubuntu) or no one is ever on (e.g. #banshee).
    - The web. Most of what I've found is either too specific to one particular bug or too general.

    Any thoughts?

    Read the article

  • Name one good reason for immediately failing on an SMTP 4xx code

    - by Avery Payne
    I'm really curious about this. The question (highlighted in bold): Can someone name ONE GOOD REASON to have their email server permanently set up to auto-fail/immediate-fail on 4xx codes? Because frankly, it sounds like "their" setups are broken out of the box. SMTP is not instant messaging. Stop treating it like IRC or Jabber or MSN or insert-IM-technology-here. I don't know what possesses people to have the "IMMEDIATE DELIVERY OR FAIL" mentality with SMTP setups, but they need to stop doing that. It just plain breaks things. Every two or three years, I stumble into this. Someone, somewhere, has decided in their infinite wisdom that 4xx codes are immediate failures, and suddenly it's OMGWTFBBQ THE INTARNETZ ARE BORKEN, HALP SKY IS FALLING instead of "oh, it'll re-attempt delivery in about 30 minutes". It amazes me how it suddenly becomes "my" problem that a message won't go through because someone else misconfigured "their" SMTP service. IF there is a legitimate reason for having your server permanently set up in this manner, then the first good answer will get the check. IF there is no good reason (and I suspect there isn't), then the first good-sounding-if-still-logically-flawed answer will get the check.

    Read the article

  • Throughput and capacity planning help for a C10K-like design

    - by z8000
    I am designing a network service in which clients connect and stay connected -- the model is not far off from IRC, less the s2s connections. I could use some help understanding how to do capacity planning, in particular with the system resource costs associated with handling messages from/to clients. There's an article that tried to get 1 million clients connected to the same server [1]. Of course, most of these clients were completely idle in the test. If the clients sent a message every 5 seconds or so, the system would surely be brought to its knees. But... how do you do less hand-waving and, you know, measure such a breaking point? We're talking about messages being sent by a client over a TCP socket, into the kernel, and read by an application. The data is shuffled around in memory from one buffer to another. Do I need to consider memory throughput ("5 GT/s" [2], etc.)? I'm pretty sure I have the ability to measure the basic memory requirements due to TCP/IP buffers, expected bandwidth, and the CPU resources required to process messages. I'm a little dim on what I'm calling "throughput". Help! Also, does anyone really do this? Or do most people sort of hand-wave, see what the real world offers, and then react appropriately?

    [1] http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3/
    [2] http://en.wikipedia.org/wiki/GT/s
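
    As a rough illustration of the kind of back-of-the-envelope arithmetic involved (the message size, rate, and buffer size below are assumed figures for illustration, not numbers from the question):

    # Back-of-the-envelope sizing with assumed numbers.
    clients = 1000000          # connected clients
    msg_size = 512             # bytes per message, payload plus protocol overhead (assumption)
    interval = 5.0             # seconds between messages per client (assumption)

    msgs_per_sec = clients / interval                 # 200,000 messages/second
    ingress_bytes_per_sec = msgs_per_sec * msg_size   # ~100 MB/s of inbound data

    # Kernel socket buffers tend to dominate memory: even a modest 8 KB
    # receive buffer per connection is ~8 GB of RAM at a million connections.
    sockbuf = 8 * 1024
    buffer_ram = clients * sockbuf

    print("messages/sec:", int(msgs_per_sec))
    print("ingress MB/s:", ingress_bytes_per_sec / 1e6)
    print("socket buffer RAM (GB):", buffer_ram / 1e9)

    Numbers like these mostly tell you where to look (memory for buffers, per-message CPU cost, NIC bandwidth); the actual breaking point still has to be measured with a load generator against the real server.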

    Read the article

  • Reduce power consumption of gaming computer while idle

    - by White Phoenix
    This is my current build:

    - EVGA X58 (first generation) motherboard
    - Intel i7 965 clocked @ 3.3 GHz
    - 3x DDR3-1600 Corsair RAM at stock timings and voltages
    - Corsair AX750 80 Plus Gold PSU
    - 1 optical drive
    - 1 Seagate 7200.10 500 GB drive
    - 2x Western Digital Caviar Black 1 TB drives
    - OCZ Vertex 1 60 GB
    - EVGA GTX 460 oc'd at 800/1600/1850
    - Antec 1200 case
    - HT-Omega Striker 7.1 sound card
    - Windows 7 32-bit Professional (PAE enabled)

    I've already seen the posts "Reduce power use on computer" and "How do I lower power consumption of my computer", and while useful, I'm looking for answers specific to my build and OS. I'm pretty sure this build is an energy-intensive build by default, but I want to try to reduce the amount of energy it uses when I leave it idle (when I go to bed or go out, etc.).

    The first requirement for this machine is that I need to leave it on, so I cannot turn it off while it's not being used. I run it as a file server for personal reasons, and I also leave it on in case people leave me messages on various IM services and chat clients (IRC, MSN, Steam, XFire, Pidgin, etc). I'm also unable to replace the parts in my computer with cheaper, "greener" parts.

    What are some ways to minimize the amount of power the machine uses? I'm already using a high-efficiency power supply (80 Plus Gold), but I imagine there are other things that can be done in the BIOS and Windows' power settings to reduce power usage while I'm not using the computer. From what I can tell, I can't use Sleep since that'll disable network access (the whole reason why I leave the computer on in the first place). I already turn off my monitor when it's not in use. I enabled Intel SpeedStep within the BIOS (I know, I have a 965, so why am I enabling SpeedStep?). Should I bring the graphics card back to stock speeds and lower the clock on the processor even more?

    The main reason I'm asking is that I think this computer alone is the reason my power bill is high, so I want to reduce its consumption to as low as possible without having to shut the thing down.

    Read the article

  • Wireless disconnects at random after upgrade to Ubuntu 10.4

    - by Daniel Elessedil Kjeserud
    After upgrading my home server from Ubuntu 8.10 to 10.4 my wireless seemingly drops out, even though my IRC client keeps its connection to the servers, so it looks like the machine just stops taking wireless requests. A ping gives me this:

    Request timeout for icmp_seq 27
    ping: sendto: Host is down

    After a while the machine just starts responding again, without any interaction from me. When the machine comes back, this is what dmesg gives me:

    [ 18.296288] wlan0: direct probe to AP 00:1b:63:22:a4:5f (try 1)
    [ 18.296350] wlan0: deauthenticating from 00:1b:63:22:a4:5f by local choice (reason=3)
    [ 18.296440] wlan0: direct probe to AP 00:1b:63:22:a4:5f (try 1)
    [ 18.298697] wlan0: direct probe responded
    [ 18.298706] wlan0: authenticate with AP 00:1b:63:22:a4:5f (try 1)
    [ 18.306836] wlan0: authenticated
    [ 18.306886] wlan0: associate with AP 00:1b:63:22:a4:5f (try 1)
    [ 18.309396] wlan0: RX AssocResp from 00:1b:63:22:a4:5f (capab=0x411 status=0 aid=2)
    [ 18.309402] wlan0: associated
    [ 18.310187] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
    [ 18.447742] apm: BIOS version 1.2 Flags 0x03 (Driver version 1.16ac)
    [ 18.447748] apm: overridden by ACPI.
    [ 19.163282] padlock: VIA PadLock not detected.
    [ 28.352022] wlan0: no IPv6 routers present

    kjes@brin:~$ lspci
    02:07.0 Network controller: RaLink RT2561/RT61 rev B 802.11g

    It's on a wireless network with WPA2. The machine worked without any problems on the same wireless network when Ubuntu 8.10 was the most recent version, and there have been no changes to my network recently. Even though the server drops out, everything else on the network keeps working as normal.

    Read the article

  • How do I ensure a process is running, even if it kills itself? (it needs to be restarted then)

    - by le_me
    I'm using Linux. I want a process (an IRC bot) to run every time I start the computer. But I've got a problem: the network is bad and it disconnects often, so I need to manually restart the bot a few times a day. How do I automate that?

    Additional information:

    - The bot creates a pid file, called bot.pid.
    - The bot reconnects itself, but only a few times. The network is too bad, so the bot kills itself sometimes because it gets no response.

    What I do currently (aka my approach ;) ): I have a cron job executing startbot.rb every 5 minutes. (The script itself is in the same directory as the bot.) The script:

    #!/usr/bin/ruby
    require 'fileutils'

    if File.exists?(File.expand_path('tmp/bot.pid'))
      @pid = File.read(File.expand_path('tmp/bot.pid')).chomp!.to_i
      begin
        raise "ouch" if Process.kill(0, @pid) != 1
      rescue
        puts "Removing abandoned pid file"
        FileUtils.rm(File.expand_path('tmp/bot.pid'))
        puts "Starting the bot!"
        Kernel.exec(File.expand_path('./bot.rb'))
      else
        puts "Bot up and running!"
      end
    else
      puts "Starting the bot!"
      Kernel.exec(File.expand_path('./bot.rb'))
    end

    What this does: it checks whether the pid file exists; if so, it checks whether kill -s 0 BOT_PID == 1 (i.e. whether the bot is running), and it starts the bot if either of the two checks fails. My approach seems quite dirty, so how do I do it better?
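
    One cleaner alternative to a cron watchdog, if the machine's init system supports it (Upstart on Ubuntu of that era, for example), is to let init itself respawn the bot whenever it dies. A hypothetical job file, with the path and limits as placeholder assumptions:

    # /etc/init/ircbot.conf  (hypothetical Upstart job; adjust the path to the bot)
    description "IRC bot"
    start on runlevel [2345]
    stop on runlevel [016]
    respawn
    respawn limit 10 60      # stop retrying if it dies 10 times within 60 seconds
    exec /home/user/bot/bot.rb

    Note that a job defined this way runs as root unless it is wrapped in su or start-stop-daemon, and the bot then no longer needs its own pid-file bookkeeping, since init tracks the process itself.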

    Read the article

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in much higher demand than others. The scenario is this: I have uploaded my birthday video and shared it with all of my friends. I uploaded it to myproject.com and it was stored on one of the cluster nodes, which has a 100 Mbit connection. The problem is that once all of my friends want to download the file, they can't download it quickly, since the bottleneck here is the 100 Mbit link, which is 15 MB per second; with 1000 friends downloading, each can only get 15 KB per second. I am not taking into account that the HDD is serving the same files.

    My network infrastructure is as follows: one 1 Gbit (client) server connected to 4 storage server nodes that each have a 100 Mbit connection. The 1 Gbit server can handle the traffic of 1000 users if the storage nodes can stream more than 15 MB per second to my 1 Gbit (client) server, and visitors will stream directly from the client server instead of the storage nodes. I can do this by replicating the file onto 2 nodes, but I don't want to replicate all files uploaded to my network, since that costs much more.

    So I need a cloud-based system which will push files onto additional replica nodes automatically when demand for them is high, and when demand is low, delete them from the other nodes so each file stays on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It is only able to replicate all of the files or none of the files. But I need the cluster software to do this automatically. Any solutions? (Other than recommending Amazon S3.)

    Read the article

  • How do you get Linux to honor setuid directories?

    - by Takigama
    Some time ago, while in a conversation on IRC, one user in a channel I was in suggested someone setuid a directory in order for it to inherit the user id on files, to solve a problem someone else was having. At the time I spoke up and said "Linux doesn't support setuid directories". After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory.

    Just to explain, when I say "Linux doesn't support setuid directories", what I mean is that you can run "chmod u+s directory" and it will set the bit on the directory. However, Linux (as I understood it) ignores this bit on directories. Try as I might, I just can't quite replicate that pastebin. Someone once suggested to me that it might be possible to emulate the behaviour with SELinux, and playing around with rules, it's possible to force a uid on a file, but not from a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative: most places claim "no, setuid on directories does not work with Linux", with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html).

    I don't remember who the original person was, but the original system was a Debian 6 system, and the filesystem it was running was xfs mounted with "default,acl". I've tried replicating that, but no luck so far (tried various versions of Debian, Ubuntu, Fedora and CentOS). Can anyone clue me in on what or how you get a system to honor setuid on a directory?

    Read the article

  • mythbuntu 12 - lirc device doesn't appear to even exist

    - by FrustratedWithFormsDesigner
    I'm trying to get a new installation of Mythbuntu working. So far, everything is OK except the remote. The sensor for the remote is on my Hauppauge WinTV HVR 1250. First I tried to run irw to see what was being picked up by the sensor:

    $ irw
    connect: No such file or directory

    Then trying to run lircd gives:

    $ lircd start
    lircd: can't open or create /var/run/lirc/lircd.pid

    I look for any lirc devices and find there are none:

    $ ls /dev/li*
    ls: cannot access /dev/li*: No such file or directory

    Just to be sure, I check in /proc/bus/input/devices, which shows me two power buttons (not sure why), the keyboard and mouse devices, and the audio devices. Nothing for the IR receiver on the tuner card (which I thought was strange, because shouldn't the tuner show up here?).

    $ cat /proc/bus/input/devices
    I: Bus=0019 Vendor=0000 Product=0001 Version=0000
    N: Name="Power Button"
    P: Phys=PNP0C0C/button/input0
    S: Sysfs=/devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
    U: Uniq=
    H: Handlers=kbd event0
    B: PROP=0
    B: EV=3
    B: KEY=10000000000000 0

    I: Bus=0019 Vendor=0000 Product=0001 Version=0000
    N: Name="Power Button"
    P: Phys=LNXPWRBN/button/input0
    S: Sysfs=/devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
    U: Uniq=
    H: Handlers=kbd event1
    B: PROP=0
    B: EV=3
    B: KEY=10000000000000 0

    I: Bus=0003 Vendor=099a Product=7202 Version=0111
    N: Name="Wireless Keyboard/Mouse"
    P: Phys=usb-0000:00:10.1-2/input0
    S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.0/input/input2
    U: Uniq=
    H: Handlers=sysrq kbd event2
    B: PROP=0
    B: EV=120013
    B: KEY=1000000000007 ff9f207ac14057ff febeffdfffefffff fffffffffffffffe
    B: MSC=10
    B: LED=7

    I: Bus=0003 Vendor=099a Product=7202 Version=0111
    N: Name="Wireless Keyboard/Mouse"
    P: Phys=usb-0000:00:10.1-2/input1
    S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.1/input/input3
    U: Uniq=
    H: Handlers=kbd mouse0 event3
    B: PROP=0
    B: EV=1f
    B: KEY=4837fff072ff32d bf54444600000000 70001 20c100b17c000 267bfad9415fed 9e168000004400 10000002
    B: REL=143
    B: ABS=100000000
    B: MSC=10

    I: Bus=0000 Vendor=0000 Product=0000 Version=0000
    N: Name="HD-Audio Generic Line"
    P: Phys=ALSA
    S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input4
    U: Uniq=
    H: Handlers=event4
    B: PROP=0
    B: EV=21
    B: SW=2000

    I: Bus=0000 Vendor=0000 Product=0000 Version=0000
    N: Name="HD-Audio Generic Front Mic"
    P: Phys=ALSA
    S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input5
    U: Uniq=
    H: Handlers=event5
    B: PROP=0
    B: EV=21
    B: SW=10

    I: Bus=0000 Vendor=0000 Product=0000 Version=0000
    N: Name="HD-Audio Generic Rear Mic"
    P: Phys=ALSA
    S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input6
    U: Uniq=
    H: Handlers=event6
    B: PROP=0
    B: EV=21
    B: SW=10

    I: Bus=0000 Vendor=0000 Product=0000 Version=0000
    N: Name="HD-Audio Generic Front Headphone"
    P: Phys=ALSA
    S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input7
    U: Uniq=
    H: Handlers=event7
    B: PROP=0
    B: EV=21
    B: SW=4

    I: Bus=0000 Vendor=0000 Product=0000 Version=0000
    N: Name="HD-Audio Generic Line-Out"
    P: Phys=ALSA
    S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input8
    U: Uniq=
    H: Handlers=event8
    B: PROP=0
    B: EV=21
    B: SW=40

    According to dmesg, the driver was registered, but it doesn't look like any device was associated with the driver:

    $ dmesg | grep irc
    [ 10.631162] lirc_dev: IR Remote Control driver registered, major 249

    So far, I've seen a number of forum pages suggesting that I use some trick to create a link between /dev/lirc and some other device that is the REAL IR sensor, like /dev/event5, but those cases assume that the real device is shown in /proc/bus/input/devices, and I don't see any such device there. Any suggestions on how to fix or further diagnose this?

    Read the article
