Search Results

Search found 21991 results on 880 pages for 'going crazy'.

  • Dependency problems installing OpenJDK on Ubuntu

    - by Rodnower
    I'm trying to install openjdk-7-jre:

        sudo apt-get install openjdk-7-jre

    But I'm running into dependency problems:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help
        to resolve the situation:

        The following packages have unmet dependencies:
         openjdk-7-jre : Depends: openjdk-7-jre-headless (= 7u7-2.3.2a-0ubuntu0.12.04.1) but it is not going to be installed
                         Depends: libgif4 (>= 4.1.4) but it is not installable
                         Depends: libatk-wrapper-java-jni (>= 0.30.4-0ubuntu2) but it is not installable
                         Recommends: libgnome2-0 but it is not installable
                         Recommends: libgnomevfs2-0 but it is not going to be installed
                         Recommends: ttf-dejavu-extra but it is not installable
        E: Unable to correct problems, you have held broken packages.

    My version is Ubuntu 12.04.1 LTS. I have no idea how to resolve these dependencies. Can someone help me? Thanks in advance.
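
    For reference, "not installable" for packages like libgif4 and ttf-dejavu-extra usually means the repository that carries them (universe, in the case of openjdk-7 on 12.04) isn't enabled, or the package index is stale; the "held broken packages" line is worth checking too. A minimal sketch of the usual first steps (the add-apt-repository line assumes the default archive mirror):

        # Any packages actually on hold?
        dpkg --get-selections | grep hold

        # Make sure universe is enabled, then refresh the index
        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu precise universe"
        sudo apt-get update

        # Repair any half-configured dependencies, then retry
        sudo apt-get -f install
        sudo apt-get install openjdk-7-jre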

    Read the article

  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation with a particular user that has several aliases all pointing to a single account (mine, for the record). I've been copying all of my mail to Gmail and reading it there, but it annoys me that I have to go back weekly, log into my account on iMail, and delete between six and ten thousand copies of messages I've already received, just to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely).

    I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (are they processed in order, or is it first match wins?), and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to screw up all the sender information once the mail reaches my Gmail account and show everything as coming from me instead of the original senders (can anyone confirm that this is accurate?).

    An easy answer would be to delete my user account entirely and replace it with an alias that maps to my Gmail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me.

    Does anyone know of a more direct workaround for copying a user's incoming mail to an outside server and then deleting the local copy, without removing the account entirely? Or is this just wishful thinking? Thanks in advance. Scott

    Read the article

  • Starting my own server - basic recommendations and questions [closed]

    - by Ilia Rostovtsev
    Possible Duplicate: Can you help me with my capacity planning?

    I'm planning to build my own high-performance server and then use colocation services to keep it up and running. I'm going to use it for processing video and keeping a big video site up (using FFmpeg, MEncoder, etc.). I just need recommendations on whether the hardware listed below is good enough, and whether the parts will work together well and fast enough. Have I missed anything? (I did remember the CPU coolers!)

    I'm planning to use SSD drives, so please tell me whether they'll behave just like regular HDDs, only much faster. Can they be put in a RAID array (is that even possible for SSDs)?

    Here is what I would like to get:

        Intel Server System SR1600URHSR (Urbanna) or Intel Server System SR1695WBAC
        2 x Intel Xeon X5650
        4 x 16 GB DDR3 1333 MHz Kingston ECC Reg (KVR13R9D4/16)
        3 x (or maybe 4 x) 480 GB SSD Intel 520 Series (SSDSC2CW480A3K5)

    Which server system would be better? Is the listed hardware current and good enough to be worth buying at the moment, or should I look at something slightly more expensive, more up to date, and more powerful?

    As for software, I would like to run CentOS 6 64-bit with WHM/cPanel. Any suggestions for a cheaper but equally or more powerful server management system than WHM? And what are the most important points to keep in mind when starting and maintaining your own server?
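
    On the RAID question: Linux software RAID treats SSDs like any other block device, so yes, they can be arrayed. A minimal sketch with mdadm, assuming four SSDs at /dev/sdb through /dev/sde and a RAID 10 layout (device names and level are placeholders; the config file path varies by distro):

        # Build the array and watch it sync
        sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde
        cat /proc/mdstat

        # Filesystem, then persist the array definition
        sudo mkfs.ext4 /dev/md0
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf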

    Read the article

  • Open source solution for a gateway serving a housing cooperative network of 150 people

    - by SirDinosaur
    I just inherited a barely functioning network for a student housing cooperative of about 150 people. In its current state, as I understand it from the previous person in charge of the network, we have working wireless access points and working Ethernet runs going to working gigabit switches, which in turn go to a barely functioning gateway (right now a simple home router) and on to one of three possible outbound connections. It is possible to connect to the network over wireless or Ethernet, but especially during peak hours, packets and connections are likely to be dropped or otherwise get no response.

    My intuition tells me to replace the gateway with something that can handle multiple outbound connections (WAN) and one inbound connection (LAN), while the rest of the network seems suitable for now. I'm somewhat knowledgeable about Linux (I've been using Debian, after starting on Arch Linux) and I want to use as much open source as possible, but I'm not sure whether a simple server that I could easily understand will work for this situation. Do I need specialized hardware to handle the routing more effectively? If so, what are my options? (I found this - thoughts?) And if a Debian server would work, is there anything else I should know about the specs required for this type of server?

    Links to any useful information on using open source to maintain this type of network would also be most appreciated. <3

    P.S. Crossposted: http://redd.it/yybp2
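
    For scale: a plain Debian box can do this gateway job, and the core of a NAT gateway is only a few lines, which dedicated firewall distributions essentially wrap in a management UI. A minimal sketch, assuming eth0 faces the upstream connection and eth1 the LAN:

        # Enable packet forwarding (persist via /etc/sysctl.conf)
        sudo sysctl -w net.ipv4.ip_forward=1

        # NAT outbound LAN traffic and allow the return path
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        sudo iptables -A FORWARD -i eth0 -o eth1 \
            -m state --state ESTABLISHED,RELATED -j ACCEPT

    Balancing or failing over across the three outbound connections is the harder part (it needs policy routing on top of this); that's where purpose-built gateway distributions tend to earn their keep.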

    Read the article

  • Internal/External Moodle - DNS

    - by Chief17
    Network diagram: I have a Moodle (a VLE) setup that I want to be both internally and externally accessible. The green route on the diagram below is the route I would like traffic to take when the user is inside the LAN; the red route is the one it seemingly does take. The website has a domain name, like most websites do.

    From the user PC, if I ping the domain name I get the internal IP of the webserver (because of a hosts file entry), and if I nslookup the domain name I also get the internal IP of the webserver (because of an A record on my DNS server). Running the same two commands on the webserver gives me the webserver's external IP. (Going well so far.) If I use PHP's gethostbyname() on the Moodle website with the domain name as the parameter (so PHP/Apache resolves the hostname), it returns the external IP of the webserver - good news, DNS seems to be doing what I want it to.

    The only thing that is confusing me, and preventing the Moodle single sign-on from working, is that when I get Moodle to show my IP address, it reports an external one (outside my NATting firewall) when it should show an internal IP. That's the issue. Any ideas on how to go about resolving it? Any ideas on tests I can perform? (I have also tried a tracert, and the request goes directly to the webserver.) Thanks all!
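
    A likely culprit is hairpin NAT: if the server itself (or anything in the login flow) resolves the site to its external address, LAN traffic loops out through the firewall and comes back with an outside source address. Some quick checks to map who resolves what (hostnames and resolver IPs below are placeholders):

        # From the user PC: the internal DNS should hand back the internal IP
        nslookup moodle.example.com 10.0.0.1

        # On the webserver: which resolver is it actually using, and what
        # answer does it get for its own name?
        cat /etc/resolv.conf
        dig moodle.example.com +short

        # Watch what source address the webserver records for a LAN client
        tail -f /var/log/apache2/access.log

    If the webserver resolves its own name to the external IP (as the gethostbyname() result suggests), pointing its resolver at the internal DNS server, or adding a hosts entry on the webserver itself, keeps the whole flow on the green route.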

    Read the article

  • ffmpeg: converting an AVI to a reduced, shareable FLV/MP4

    - by meder
    I recently followed a guide and recompiled my ffmpeg so x264 is enabled. I used some generic settings to convert my 700 MB AVI file to MP4; the result was a 407 MB MP4 file.

    The original AVI file's settings:

        Codec: DX50
        Resolution: 704x304
        Frame rate: 23.976023
        Stream 1 - Codec: mpga, Type: Audio, Channels: 2,
                   Sample rate: 48000 Hz, Bitrate: 179 kb/s

    The command I used:

        ffmpeg -i input.avi -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -vpre hq -crf 22 -threads 0 output.mp4

    The settings of the output file (output.mp4):

        Codec: avc1
        Resolution: 704x304
        Display resolution: 704x304
        Frame rate: 11.988011
        Stream 1 - Codec: mp4a, Type: Audio, Channels: 2, Sample rate: 48000 Hz,
                   Bits per sample: 16, Bitrate: 1536 kb/s

    The quality of the output MP4 is quite nice; it seems pretty much the same as the original source. However, I'm trying to reduce the file size, and I'm not really sure whether I should go with FLV or keep MP4. The obvious advantage of FLV is that it's playable with a Flash player (I have come across some SWF players that take a parameter pointing at an FLV file), but maybe I could use the HTML video element instead, as I'm only going to be displaying this video privately, so I don't have to worry about supporting legacy browsers such as IE.

    Can someone recommend settings that would bring the file size down to around 100-150 MB or so? I don't mind a reduction in quality, nor do I mind resizing it. I was going to resize initially, but I wasn't sure what the guidelines are (if any) for choosing a resolution: since this video is 704x304, would it still be OK to force it into a resolution that doesn't perfectly fit the aspect ratio? I have no clue about that part. I realize I could probably have specified a CRF of 28 instead of 22, but I'm not sure whether to do that rather than (or in addition to) specifying a smaller resolution, which might make it smaller as well.
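
    For targeting a size, the arithmetic is: size in MB ≈ (video kbps + audio kbps) x duration in seconds / 8192. A hedged sketch using the same-era flags as the command above (480x208 roughly preserves 704x304's aspect ratio and keeps both dimensions divisible by 16; the exact CRF-to-size mapping depends on content, so expect to tweak):

        # e.g. a 5400 s video with a ~130 MB target:
        #   130 * 8192 / 5400 ~= 197 kbps total -> not much headroom, so downscale
        ffmpeg -i input.avi -vcodec libx264 -vpre hq -crf 28 -s 480x208 \
            -acodec libfaac -ab 96k -ac 2 -threads 0 output_small.mp4

    Raising the CRF (say 26-30) shrinks the file at some quality cost, and the aspect-ratio error here (2.308 vs 2.316) is visually negligible.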

    Read the article

  • Why can't I renew my dynamic IP address?

    - by qwerty
    So, I'm going to explain this from the start. I've started a project with a friend of mine that includes a web spider which crawls through all the pages on a site and stores them in a DB. Since I'd never done this before, I didn't think about the number of requests I was actually sending to the site, and after a day or two my IP finally got blocked. I need to be able to visit that site, as it's very important to me - not only for my project, but for other reasons too. (And if I'm able to renew my IP, I'm going to set a delay on the crawler so I don't get blocked again or DDoS the site.)

    I have a dynamic IP address - at least that's what my router settings say. I've tried ipconfig /flushdns, ipconfig /release, and restarting the computer. No result; I end up with the same IP address. I've also tried renewing it from the router; however, I think that uses the same method, which isn't working.

    Is it possible the site has blocked my MAC address? Can a site even see my MAC address?
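
    Two notes that may help. First, websites never see your MAC address; it's stripped at the first router hop, so the block is almost certainly by IP (or by cookies). Second, ipconfig only renews the lease between your PC and your router; the public IP lives on the router's WAN side, and many ISPs keep handing the same address to the same WAN MAC. A sketch of the usual escalation:

        :: On the PC (this only changes the LAN lease if the router NATs you)
        ipconfig /release
        ipconfig /renew

        :: If the public IP is unchanged: power the modem/router off for
        :: several hours, or change the WAN-side MAC via the router's
        :: "MAC clone" setting (the name varies by vendor), then reconnect.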

    Read the article

  • Exchange 2010 setup warning: cannot verify Host (A) record

    - by Joost Verdaasdonk
    When I try to install Exchange 2010 on my Windows Server 2008 R2 server, I get a warning during the prerequisites check:

        Warning: setup cannot verify that the 'Host' (A) record for this computer
        exists within the DNS database on server:

    The goal of this Exchange setup is to be able to send email within my local domain as well as receive and send email through the public domain name.

    Some information about my setup. This server is going to be a dedicated Exchange host and has the following IP configuration (the IPs are examples, not the real ones, of course):

        Local VLAN NIC:
            IP: 10.10.50.22
            Subnet: 255.255.255.0
            No gateway
            DNS: 10.10.50.1 (domain controller with authoritative DNS)

        Public WAN NIC:
            IP: 90.195.200.148
            Subnet: 255.255.255.235
            Gateway: 90.195.200.145
            DNS: 90.195.200.12 | 190.160.230.14

        My public domain - exampledomain.com:
            A record: mail - IP: 90.195.200.148
            MX record - IP: 90.195.200.148

    As I see it now, Exchange setup is looking for the A record on one of the DNS servers configured on my public WAN NIC, and of course that's not where my A records are defined. I have those A records in two places: in the domain controller's DNS (on the private NIC) and in the public DNS registration of my domain (exampledomain.com).

    My question is: is this warning going to be a problem? Can I do something better in my setup so that the warning goes away? Please advise.
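
    The warning literally means setup asked 90.195.200.12 (the first resolver on the WAN NIC) for the server's A record, which only the DC knows about. The usual cure on a domain member is to point the server's DNS exclusively at the internal AD DNS (10.10.50.1) and let that server forward external lookups. A quick check from the Exchange box (the hostname is a placeholder):

        :: Should return 10.10.50.22 - the record the DC holds
        nslookup exchange01.yourdomain.local 10.10.50.1

        :: The public resolver knows nothing about it - this is what
        :: setup is effectively asking right now
        nslookup exchange01.yourdomain.local 90.195.200.12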

    Read the article

  • IIS login / logout interferes with media player

    - by Mark Sowul
    So I'm listening to music with Windows Media Player, going insane because playback randomly stops every so often. I finally noticed that it correlates with instances of csrss, winlogon, and logonui repeatedly starting and quitting, and I tracked that down to IIS repeatedly logging on and off due to WebDAV requests going through my user account (my laptop syncing up with OneNote over WebDAV). I see tons of login spam in my security event log.

    I am surprised that IIS needs to log in this much, and this has only been happening for a couple of months. I'm not sure where the actual problem lies - with IIS, or Media Player, or something else - so I figured I'd first find out whether the IIS login behavior is actually abnormal. Is it normal for IIS to log in this much? And is it normal for that to keep spawning winlogon, csrss, and logonui every second or so? Presumably while OneNote is syncing, I see a constant stream of logon events in the event log every few seconds:

        Logon (id = 1, source = laptop)
        Special Logon (id = 1, sets up privileges)
        Special Logon (id = 1, seems to set up the same privileges)
        Logon (id = 2, same laptop)
        Logoff (id = 2)
        Logoff (id = 1)

    The DefaultAppPool (apparently the only one in use) has its idle timeout set to 20 minutes and "load user profile" set to false. I'm not sure what other settings, if any, might be relevant.

    Read the article

  • CloneZilla Broke My System? Ubuntu Installation Lost After Running CloneZilla

    - by nicorellius
    I just read through this post and tried to get my installation back using this answer, to no avail. What happened to me is this: I spent an hour or more reading through the CloneZilla docs and thought I was ready to test it out, so I burned a disc with the ISO image on it and ran it. The system I used was Ubuntu 10.04, 32-bit. Everything seemed to go fine: I made a clone of my first partition and copied it to my second partition, followed the instructions, removed the disc, and rebooted my system.

    At this point, I would expect to have two bootable Linux installations, identical to one another. However, upon booting, I got this error message:

        error: no such device: 4cf1a6ef-xxxx-xxxx-xxxx-4e3a3ce92bcd
        error: file not found

    I booted from a live Ubuntu disc and was able to see my two partitions: 4cf1...(1) and 4cf1...(2) (abbreviated, because the volumes have long identifiers). The 50 GB partition, on which the original Ubuntu installation sits, is the first, and the second partition (175 GB) has the same identifier with an "_" at the end. I could browse the partitions and see the files, but I'm not sure what to do next.

    I know there is a way to restore my GRUB loader and actually boot either of these installations, but my Linux know-how is limited. Can I edit the boot loader configuration to fix this problem? The only clue I have is that CloneZilla said something about making a new GRUB, but I thought it was going to basically modify it so I could boot either installation. Not sure what happened. I am going to look through this post in the meantime to see if I can learn anything to help my problem, but I thought that, since this happened as a result of using CloneZilla, it may be a distinct question for this board.
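
    The standard repair here is to reinstall GRUB 2 from the live CD, pointed at whichever partition should own the boot menu (device names below are placeholders; check yours first):

        # Identify the 50 GB root partition, e.g. /dev/sda1
        sudo fdisk -l

        # Mount it and install GRUB to the drive's MBR using that
        # partition's /boot directory
        sudo mount /dev/sda1 /mnt
        sudo grub-install --root-directory=/mnt /dev/sda

        # After rebooting into the restored system, rebuild the menu so
        # both installations are detected
        sudo update-grub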

    Read the article

  • How many reverse proxies (nginx, haproxy) is too many?

    - by Alysum
    I'm setting up an HA (high availability) cluster using nginx, haproxy and apache. I've been reading great things about nginx and haproxy; people tend to choose one or the other, but I like both. Haproxy is more flexible for load balancing than nginx's simple round robin (even with the upstream-fair patch), but I'd like to keep nginx right at the point of entry to the cluster for redirecting non-HTTPS to HTTPS, among other things. On the other hand, nginx is a lot faster at serving static content and would reduce the load on the powerful apache, which loves to eat a lot of RAM!

    Here is my planned setup:

        Load balancer: nginx listens on ports 80/443 and proxies to haproxy
        on 8080 on the same server, which load balances between the
        multiple nodes.

        Nodes: nginx on each node listens for requests coming from haproxy
        on 8080. If the content is static, it serves it directly; if it's a
        backend script (PHP in my case), it proxies to apache2 on the same
        node, listening on a different port number.

    Technically this setup works, but my concern is whether having requests go through several proxies is going to slow them down. Most of the requests will be PHP requests, as the backends are services, which means going through nginx - haproxy - nginx - apache. Thoughts? Cheers
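
    On the per-hop cost: each localhost/LAN proxy hop typically adds well under a millisecond, and it's easy to verify empirically by timing the same request at each layer before committing (hostnames and ports below are placeholders matching the setup above):

        # Hit each layer in turn; the differences are the per-hop proxy cost.
        # For throughput rather than latency, ab -n 1000 -c 10 works the same way.
        time curl -s -o /dev/null http://node1:9001/test.php   # apache directly
        time curl -s -o /dev/null http://node1:8080/test.php   # node nginx -> apache
        time curl -s -o /dev/null http://lb/test.php           # full chain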

    Read the article

  • Mac OS X: Open up 3 terminals, run different commands in each of them, to set up a development environment

    - by taelor
    I'm a Ruby on Rails web developer, and there is a lot of repetition I go through to start up my development environment. I was wondering whether there is any way I can remove some of this repetition by writing a script, or using a program (like Quicksilver) or something, to get my work environment going.

    I know how to use Quicksilver to open Terminal, and I even have a saved window group to get my three or four panes open. The next thing I would love to happen automatically is to have all three go to a certain directory and each run a different command: one starts the local server; in another tab, a background process starts; another opens TextMate and then starts a console session; and the last one runs an svn (or git) status. Oh, and I would love it to also open Firefox with a few tabs going to a couple of locations.

    Does anyone have any suggestions on how I could make all this happen with one Quicksilver command, or a double-click on some kind of script on my desktop?
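
    A plain shell script can drive all of this: open handles the GUI apps, and osascript can tell Terminal to open windows running arbitrary commands. A sketch using Rails 2-era conventions (paths and commands are placeholders to adjust):

        #!/bin/bash
        # devenv.sh - bring up the whole dev environment in one go
        PROJECT="$HOME/code/myapp"

        open -a TextMate "$PROJECT"
        open -a Firefox http://localhost:3000 http://localhost:3000/admin

        # One Terminal window per long-running command
        osascript -e "tell application \"Terminal\" to do script \"cd $PROJECT && script/server\""
        osascript -e "tell application \"Terminal\" to do script \"cd $PROJECT && script/console\""
        osascript -e "tell application \"Terminal\" to do script \"cd $PROJECT && svn status\""

    Save it, chmod +x it, and point a Quicksilver trigger (or a Desktop alias) at it.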

    Read the article

  • Resuming downloads in Firefox

    - by Kim
    Unfortunately, Firefox has still failed to add the option to resume downloads. I've run into this problem SO MANY times, and in my previous searches I found posts saying Firefox was going to fix it. As of 3.6.3 they haven't.

    I just tried Free Download Manager (FDM) again, having the Firefox addon FlashGot hand off to it. The download gets passed to FDM and fails, giving the error message "access denied, invalid username or password" - though no password was required. The site I'm trying to get the file from is turbobit.net, which limits download speeds to 100 kb/sec and has a 59-second countdown before you get the link; I guess it's transparently using a password on their end. If I just download normally (save to disk), the download starts fine, but it fails after 30 minutes to an hour (always different), my Wi-Fi connection drops briefly, and I have to start all over - so I will never be able to download a large file this way. I also tried DownThemAll (DTA) instead of FDM with FlashGot, and I get an "access denied" message in DTA as well. Again, I reload, wait the 59 seconds, download with Firefox, and the download starts fine. The failure message in the Firefox Downloads window is "source file at http... could not be read."

    Any help would be greatly appreciated. When is Firefox going to finally add the ability to resume downloads? Is there some other software I haven't found using Google that will work?
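
    Outside the browser, both wget and curl resume partial downloads natively, which sidesteps Firefox entirely; the caveat with turbobit-style hosts is that the generated link expires, so the resume may need a freshly generated URL for the same file (URLs below are placeholders):

        # -c / --continue: pick up where the partial file left off
        wget -c "http://turbobit.net/.../file.zip"

        # curl equivalent; "-C -" means "infer the offset from the local file"
        curl -C - -O "http://turbobit.net/.../file.zip"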

    Read the article

  • Word 2007 won't run, tries to reinstall, fails with error 1402.

    - by eidylon
    Okay, this problem has been plaguing this computer for a while now. We tried googling, and none of the answers we found solved it, so I am posting this here for posterity.

    Office 2007 Home/Student edition was installed on the computer, running Vista (32-bit). One day, Word just up and stopped working; all the other Office programs continued to operate as expected. Every time you click the icon for Word, it pops up an install dialog with a message reading "Preparing to install...". After a few minutes of the little progress bar going and going, it errors out with error 1402, something to the effect of:

        unable to access registry key HKEY_LOCAL_MACHINE\Software\Classes\.wll\...

    Searching around, every answer we found had to do with reassigning the permissions on this key, giving full rights to SYSTEM or to Everyone, and propagating the changes down to all subkeys. Whenever we attempted that, though, it told us we were unable to access the key due to permissions, even though we had run regedit as Administrator and were logged on with an administrative account. We also tried uninstalling Office and reinstalling it, as well as doing a repair install; both attempts threw the same 1402 error. Also of note: the executable for Word (winword.exe) was MIA, no longer to be found in the Office install directory.
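
    Error 1402 is a registry-ACL problem, and when regedit itself is refused, Microsoft's SubInACL tool (a free download that installs under the Resource Kit path) can retake ownership and reset permissions in bulk. A hedged sketch from an elevated command prompt, scoped to the hive the installer complains about:

        cd /d "%ProgramFiles%\Windows Resource Kits\Tools"
        subinacl /subkeyreg HKEY_LOCAL_MACHINE\SOFTWARE\Classes /setowner=Administrators
        subinacl /subkeyreg HKEY_LOCAL_MACHINE\SOFTWARE\Classes /grant=Administrators=f
        subinacl /subkeyreg HKEY_LOCAL_MACHINE\SOFTWARE\Classes /grant=SYSTEM=f

    With the permissions restored, the repair install usually completes and puts winword.exe back.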

    Read the article

  • Can I still restore my partition table?

    - by Johannes Lund
    I was resizing the partitions on my Mac HD from Boot Camp. I changed my mind and was going to quit, but apparently I hit a button which made every single Mac partition disappear, and Windows 7 refused to restart or be reinstalled.

    The 1 TB HD consists of three partitions, I believe. Since I can't see their actual sizes (except Bootcamp's), this is how I recall them:

        Macintosh HD - about 500 GB (somewhere around 700 GB according to Disk
        Utility, but 500 according to Finder, and 500 GB was all I could access)
        Lion Recovery disk
        BOOTCAMP - 293.36 GB

    To fix this I connected my Mac via Target Disk Mode to a PC and ran TestDisk. However, these are the results: (since I don't have 10 reputation I can't post the image showing the TestDisk results, so I'm posting a link instead, hoping that's OK). The two Mac partitions' sizes are completely wrong, and BOOTCAMP isn't showing at all. I also tested with Disk Utility from the Snow Leopard DVD; there, there is one 293.36 GB Mac OS Extended partition.

    Before I had the FireWire cable for Target Disk Mode, I tried reinstalling Windows. When that didn't work, I tried again, formatting BOOTCAMP. Was that a bad thing to do? Could it have overwritten data from Macintosh HD? Unfortunately I have no backup. I could bring the machine to some kind of computer repair firm, though.
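
    One TestDisk pitfall that produces exactly this kind of nonsense output: picking the wrong partition table type at the first menu. An Intel Mac disk is GPT, so "EFI GPT" must be selected, not "Intel" (which means legacy MBR). A rough walkthrough (the disk name is a placeholder):

        sudo testdisk /dev/sdX
        #  > Proceed
        #  > Partition table type: EFI GPT   (not "Intel", for a Mac disk)
        #  > Analyse > Quick Search, then Deeper Search if sizes still look wrong
        #  > Found partitions can be written back, or files copied off first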

    Read the article

  • Is Ksplice production ready?

    - by faultyserver
    I would be interested to hear the Server Fault community's experiences with Ksplice in production. A quick blurb from Wikipedia:

        Ksplice is a free and open source extension of the Linux kernel which
        allows system administrators to apply security patches to a running
        kernel without having to reboot the operating system. ... Ksplice
        can, without restarting the kernel, apply any source code patch that
        only needs to modify the kernel code. Unlike other hot update
        systems, Ksplice takes as input only a unified diff and the original
        kernel source code, and it updates the running kernel correctly,
        with no further human assistance required. Additionally, taking
        advantage of Ksplice does not require any preparation before the
        system is originally booted (the running kernel does not need to
        have been specially compiled, for example). In order to generate an
        update, Ksplice must determine what code within the kernel has been
        changed by the source code patch.

    So a few questions: How has the stability been? Any odd issues with its "rebootless live patching" of the kernel? Kernel panics or horror stories? I have been running it on a few test systems, and so far it's been working as advertised, but I am interested in what other sysadmins' experiences with Ksplice have been before going all in and deploying it on our production servers. So, anybody using Ksplice in production?

    Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going: If you are aware of Ksplice, is there a reason you are not using it? Do you feel it's still too bleeding-edge, unproven, or untested? Does Ksplice not fit well within your current patch-management system? Do you hate having systems with long (and secure) uptimes? ;-)
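
    For anyone evaluating it, the day-to-day surface of the Uptrack tooling is small; a sketch of the usual loop:

        # Effective kernel version, counting rebootless updates
        uptrack-uname -r

        # What's currently patched into the running kernel
        uptrack-show

        # Apply everything available, or back it all out
        sudo uptrack-upgrade -y
        sudo uptrack-remove --all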

    Read the article

  • Hardware choice: ASUS Eee Pad Slider or ASUS Eee Pad Transformer for web development?

    - by JamesM
    I was just wondering which of the following tablets seems better to get. I am a web developer, always using Unix/Linux/BSD, and I want a tablet that has a keyboard:

        http://gdgt.com/asus/eee/pad/slider/
        http://gdgt.com/asus/eee/pad/transformer/
        http://www.tweaktown.com/news/18311/asus_eee_pad_slider_transformer_tablets_with_physical_keyboard/index.html

    I know both are similar, but I'm not sure which one I should get. The Slider seems very nice, but its keyboard is fixed to the tablet, unlike the Transformer's.

    P.S.: I'm going to use one of the above to showcase my programming work at school, as well as to serve as a cheaper notebook than the $300 locked-down Windows 7 notebooks. By locked down, I mean we pay $300 for them and only after three years can we do whatever we want with them; they are Lenovo ThinkPad Mini 10s, what's installed is all you get, and they don't let us install a different OS on them.

    What I really care about is power: which one is more powerful? It will be running kFreeBSD (Debian Squeeze) with a Linux Mint theme and several other packages. Though I'm not going to run Windows (which I feel is bloated), I still want power. To help keep the machine from slowing down, I plan to have an hourly cron script clean out the cache memory.
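
    On the cache-clearing cron idea: the kernel already reclaims page cache on demand, so a scheduled drop mostly costs you warm caches; but the knob exists if you want it. A sketch of the hourly job as described:

        #!/bin/sh
        # /etc/cron.hourly/drop-caches - of questionable benefit (see above);
        # flush dirty pages first, then drop page cache, dentries and inodes
        sync
        echo 3 > /proc/sys/vm/drop_caches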

    Read the article

  • White Screen, No Errors.

    - by GruffTech
    So, an interesting problem for you guys, as I'm completely lost as to what to do or where to take the next step.

    Server and application environment:

        CentOS release 5.3 (Final)
        Apache 2.2.3-22
            EnableSendfile off
            EnableMMAP off
            ErrorLog logs/error_log
            LogLevel debug
        PHP 5.2.6-2
            error_reporting = E_ALL
            display_errors = on
            log_errors = on
            max_execution_time = 300
            max_input_time = 60
            memory_limit = 512mb
        Kohana 2.3 (PHP framework)
        HAProxy 1.3.15.6-2
        MemCacheD 1.2.6-1

    Our application is split between three web servers mounting an NFS storage server, with sticky load balancing between the three. The application seemingly runs great, but every so often, instead of loading, it just shows a pure white page: not a 404 error or a 500 server error, a clean white page. And it returns instantly, so it's not an execution-time error.

    Nothing appears in the error log or server error log; the proxy log shows a standard proxied connection, and the access log shows just a standard 200 status with 256 bytes transferred. To me, this says the application itself is having a problem: a rare, unexplainable, seemingly random problem that causes what we've now dubbed the "White Screen of Death."

    Our developers all say that since nothing is going to our error logs, it must be a server problem. But I say the same thing: nothing relevant is going to ANY of our logs, and as far as I can tell we're not having httpd children crash. Any ideas on how I can increase my logging, or somehow prove that it's not a bug in PHP, Apache, CentOS, etc.? Or, if it is somehow a bug, how to identify it?
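
    One way to corner it from the server side is to correlate the 256-byte "ghost" responses with what PHP and Apache were doing at that instant (log paths and awk field numbers below assume the stock combined log format; adjust to yours):

        # Which URLs produce the 256-byte 200s, and when?
        awk '$9 == 200 && $10 == 256 {print $4, $7}' /var/log/httpd/access_log | tail

        # Watch the error log live while reproducing
        tail -f /var/log/httpd/error_log

        # Capture one ghost response verbatim for inspection
        curl -sv http://yoursite.example/page -o /tmp/ghost.html
        wc -c /tmp/ghost.html

    If the same handful of URLs always produce it, that points at the application; if it's uniformly random across URLs and all three servers, the shared pieces (NFS, memcached, the proxy) become the suspects.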

    Read the article

  • How to limit SMTP delivery to hourly batches

    - by Jeremy W
    Moved over from Stack Overflow; sorry if you saw it there first.

    In an effort to keep us from being labeled spammers by major ISPs (in addition to SPF records, privacy policies, CAN-SPAM compliance, and the like), I wanted to limit the amount of mail we send out per hour. Is this possible with the Windows Server 2003 SMTP server? I was looking at the outbound connection properties in the SMTP virtual server config screens, but it's just not clear whether tinkering with those settings will do what I want.

    In a nutshell, I'd love mail sent by this server to queue up and go out in batches: for example, 5,000 messages every 10 minutes or so. Mail is being sent via ASP.NET. Also, I wouldn't be sending a million a day - probably 30,000 tops, and that only a few times a month. I'm just trying to avoid a tidal wave of 30k going out in one minute and setting off every network spam-monitoring alarm in North America.

    I know I could do it with a combination of a console app and a scheduled job. My question is whether there is an easier way to accomplish this with the virtual SMTP server settings on Windows Server 2003. Is this possible?
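
    Since IIS's SMTP service has no native per-hour throttle, the common workaround is close to the console-app idea: have ASP.NET write the generated messages into a holding folder, then have a scheduled task move N of them at a time into the SMTP Pickup directory, which the service sends automatically. A sketch as a batch file run every 10 minutes (paths are the defaults; adjust):

        @echo off
        :: move_batch.cmd - feed up to 5000 queued .eml files to the SMTP service
        setlocal enabledelayedexpansion
        set COUNT=0
        for %%F in (C:\MailHold\*.eml) do (
            move "%%F" "C:\Inetpub\mailroot\Pickup\" >nul
            set /a COUNT+=1
            if !COUNT! GEQ 5000 goto :eof
        )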

    Read the article

  • Windows 7 BSOD when I walk away from the computer

    - by bobobobo
    I'm having a really weird experience with this new AM3 box I set up. Everything seemed fine at first, but now I get frequent (daily) BSODs, mostly when I walk away from the computer for more than five minutes - that is, when it's idle. The BSODs, as shown by BlueScreenView, almost always involve ntoskrnl.exe, and with really normal-sounding stop codes:

        SYSTEM_SERVICE_EXCEPTION
        NTFS_FILE_SYSTEM
        DRIVER_IRQL_NOT_LESS_OR_EQUAL
        KMODE_EXCEPTION_NOT_HANDLED
        BAD_POOL_HEADER

    That's basically all of them; they just repeat after that in random order. It's not as if one is more consistently the problem than the others.

    I have Windows Update on and let it install whatever it wants. I turned the Windows indexing service off, but Windows 7 seems to do a lot of background processing when I'm away; I'll come into the room and the fan will be going nuts. (I'm pretty sure this isn't because the computer flies into a panic whenever I leave.) I tried finding updates for my ASUS board, and there really isn't much to install except some new driver firmware, which I'm going to try. But what else could it be? Is it possible it's a hardware issue with the board? Or is every Windows 7 user experiencing daily BSODs these days without my knowing?
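
    Varied stop codes blamed on ntoskrnl.exe usually mean a third-party driver or RAM is corrupting things, not the kernel itself. Two built-in tools narrow it down (run from an elevated prompt; expect a slower, crashier system while Driver Verifier is armed - that's the point, as the next BSOD should name the real driver):

        :: Stress all drivers so the faulty one is caught in the act
        verifier /standard /all
        :: ...after the next crash, turn it back off
        verifier /reset

        :: Check what runs or wakes the machine when idle
        powercfg -energy

        :: Schedule a memory test on next boot
        mdsched.exe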

    Read the article

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things that are keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going: addresses, memorizing of.

    Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it, which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread, which in turn makes them much less enthusiastic about participating in an IPv6 deployment project.

    Because of how our network works, we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned, for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done.

    We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6, even if we don't convert wholesale. For those of you who have been in IPv6-land for a while: what shortcut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6, we might get the project going.
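
    One coping pattern that comes up: make the addresses worth memorizing. The site prefix gets memorized exactly once, subnet IDs are small human-chosen numbers (or recycle the old v4 third octet as digits), and critical hosts get tiny meaningful interface IDs. A sketch in the documentation prefix (2001:db8::/32 stands in for a real allocation):

        # Company prefix, memorized once:   2001:db8:42::/48
        # Subnet ID recycles the v4 third octet, host ID the fourth:
        #   192.168.231.148  ->  2001:db8:42:231::148
        # Critical services get port-number mnemonics:
        #   DNS   2001:db8:42:1::53
        #   SMTP  2001:db8:42:1::25
        ip -6 addr add 2001:db8:42:1::53/64 dev eth0

    In conversation that reduces to "the DNS box is ::53 on subnet 1", which is no worse than "42.42".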

    Read the article

  • 10GE network: Is it still deadly expensive? Any options?

    - by BarsMonster
    Hi! I am building a home cluster where I'm going to have about 16 nodes that can live with 1G ports, but I really want 10GE on the file server and the central node. It's all local, so no need for cables longer than 3-5 m. And of course I want to spend as little money as possible (not going to spend more than the whole cluster costs). :-) What are my options?

    1) The legacy solution: take a 24-48 port 1GE switch and connect the file/central nodes via 4-8 aggregated links. This will work, I guess, and the cost is very acceptable, but I am not sure whether it's OK to use that many aggregated links. And of course it would be hard to double the bandwidth later when needed... :-D

    2) A switch with several 10GE uplink ports. As far as I can see, they all require modules costing about $1000, so I would need four 10G modules and two 10GE cards... That smells like well over $5000...

    3) Connect the file and central nodes directly via two 10G cards, and put four quad-port 1GE NICs in the file server. I'd save on two 10G modules and a switch; the file server would have to do packet routing, but it's still going to have a lot of CPU left. :-)

    4) Any other options? InfiniBand?

    5) Do Myrinet adapters work fine? I guess there are no cheaper options?

    6) Hmm... Scrap the file server, put it all on the central node, and provide a dedicated 1GE port for each of the nodes... That would be sad...
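
    On option 1: bonding 4-8 ports with LACP is routine on Linux, with one caveat worth knowing up front: the hash spreads flows, not packets, so a single TCP stream still tops out at one link's speed. A sketch (Debian-flavoured, interface names hypothetical; the switch ports must be configured as an LACP group too):

        # mode 802.3ad = LACP; layer3+4 hashing spreads flows across links
        modprobe bonding mode=802.3ad miimon=100 xmit_hash_policy=layer3+4
        ifconfig bond0 netmask 255.255.255.0 up
        ifenslave bond0 eth0 eth1 eth2 eth3
        cat /proc/net/bonding/bond0    # verify all slaves joined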

    Read the article

  • How do you change a Cisco ASA 5510 management interface?

    - by Sam Sanders
    I want to add a redundant interface pair to my Cisco ASA 5510. The management interface currently uses Ethernet0/1 (10.1.25.254/24), one of the interfaces I want to put into the redundant pair, so I wanted to set up Management0/0 as the new management interface. The other interface I want in the redundant pair is Ethernet0/2 (10.1.0.254/24). Ethernet0/3 (10.1.251.5/24) is not going to be part of it.

    I gave Management0/0 the IP address 10.1.254.5 and was able to connect a Windows 7 box to Management0/0, use 10.1.254.5 as its gateway, and ping another address on the 10.1.251.0/24 network - but I can't ping the interface itself (10.1.254.5), and I can't use ASDM or SSH to log onto the ASA at that address. I set up rules in Configuration > Device Management > Management Access > ASDM/HTTPS/Telnet/SSH that look like the original rules for the Ethernet0/1 interface.

    The last thing I can think to try is changing Configuration > Device Management > Management Access > Management Interface. I'm a bit nervous about changing it; the description of it is a bit vague. What is it going to do if I change it? And what is the correct way to change a management interface?
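
    For comparison, the CLI equivalent of what the new interface needs before it will answer pings or ASDM/SSH - a nameif, a security level, and per-interface management access (addresses match the description above; netmasks assumed /24):

        interface Management0/0
         nameif management
         security-level 100
         ip address 10.1.254.5
         management-only
        !
        ! permit ASDM/HTTPS and SSH from the management subnet on this interface
        http 10.1.254.0 management
        ssh 10.1.254.0 management

    If the ASDM/SSH rules were cloned from Ethernet0/1, they may still reference that interface's nameif rather than the new one, which would match the symptoms.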

    Read the article

  • Drupal 7: One-time user accounts

    - by Noob
    I'm going to create a survey in Drupal 7 with the Webform module, installed on a Debian system which may be adapted in every way. The users taking the survey (personally known to me, approx. 120) will walk into a room and complete it in browsers on different computers. After that, they'll leave the room, other persons will enter and complete the survey on the same computers, and so on. Each user may enter only one submission, and the process needs to be anonymous, i.e. I mustn't have any idea of who made which submission.

    My current solution is to generate random one-time passwords and hand out one password per user (without noting who got which password). Within the survey there is a password field where the one-time password is entered; the value is checked by Webform to be unique. I'd get the data via CSV or Excel and verify the passwords manually in Excel by comparing them to the list of valid passwords.

    The problem is: I don't like the idea of manually generating the password list, copying it to Excel, and doing a manual check. That's fine for one-time use, but we're going to repeat the survey every once in a while. I'd rather generate one-time logins (like user0001/fdlkjewf, user0002/dfrefnnr, ...) for each survey, hand them out to the users, and let Drupal/Debian/whatever check whether a submission is valid or not.

    Do you have any idea how to batch-generate about 120 users with one-time passwords in Drupal 7 and verify that each user may submit the form only once? Or do you even have a better idea of how to accomplish the task within the intranet? Thank you for your help.
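
    The batch-generation half is scriptable with drush (Drupal's command-line tool), run from the site root; the one-submission limit is then enforced with Webform's per-user submission limit set to 1. A sketch (the mail domain and password recipe are placeholders):

        #!/bin/bash
        # Create user0001..user0120 with random throwaway passwords and print
        # the list to hand out - without recording who receives which login.
        for i in $(seq -w 1 120); do
            PASS=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 8)
            drush user-create "user0$i" --password="$PASS" \
                --mail="user0$i@example.invalid"
            echo "user0$i  $PASS"
        done

    Deleting and regenerating the accounts between survey runs keeps each batch one-time.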

    Read the article
