Search Results

Search found 10961 results on 439 pages for 'internal dns'.

Page 360 of 439

  • Is it possible for a faulty processor to cause audio static/noise?

    - by Tom
    I have a Core 2 Extreme processor I received from a friend and have set up an XBMC box using it. However, I constantly get audio static whenever playing any music or videos. Here is a video of the sound: http://www.youtube.com/watch?v=SqKQkxYRVA4
    I have tried replacing everything short of the case and the processor, including cables, audio interfaces, operating systems, RAM, etc., leading me to think it might be either the case shorting out the motherboards I have tried or a faulty processor. Is it possible for a faulty processor to cause audio static/noise? Any feedback would be appreciated.
    Edit - Here's a list of things I have tried:
      - Reinstalling the OS
      - Installing/upgrading/repairing PulseAudio/ALSA
      - Installing alternate OSes: straight Ubuntu, Lubuntu, Xubuntu, Arch, Mint, Windows 7
      - Switching audio from the external card to internal
      - Optical, audio out through HDMI, audio out through headphones
      - Different ports on the receiver (my main desktop sounds fine on the same sound system)
      - Different optical cables
      - Unplugging everything unnecessary from the motherboard (1 HD, 1 stick of RAM, 1 keyboard)
      - Swapping out RAM
      - Swapping out the motherboard
      - Replacing the graphics card (it was replaced due to the fan being noisy, not specifically for this problem)
      - Different hard drives
      - Swapping the power supply
      - Disabling onboard audio
      - Switching the power cable
      - Plugging in through a surge protector
      - Plugging into a different outlet on a separate circuit

    Read the article

  • Can not copy files after installing windows

    - by Ali
    I am experiencing a weird problem. I was running Xubuntu on my laptop until yesterday, when I had to delete Xubuntu and install Windows. I had an NTFS partition on my Xubuntu system that I kept some files on. Today, after installing Windows, I wanted to move all the files from that partition to an external HDD. I selected all files and folders and clicked Copy, then I went to the HDD and clicked Paste, but nothing happened. I cannot do it, and I do not know why. I copy the files, and wherever I click paste, nothing happens. If I try to copy the files and folders one by one, I can copy some of them, but some of them do not move.
    The other problem I have is that I cannot open some files, in particular PDF files. When I click on PDF files I get this error: "There was an error opening this document. This file cannot be found." Also, I cannot play some MP4 files, and I cannot open some JPG and TXT files. I get this error: "The directory name is invalid."
    So in summary, after removing Xubuntu and installing Windows 7 I have the following problems with one of the NTFS partitions on my internal drive:
      - I cannot copy or cut all folders and files from that partition to any other partition - I also do not get any errors.
      - I can copy some folders and files.
      - I cannot access some PDF, JPEG, TXT and MP4 files and get the above errors.
    I should also mention I did not change anything for this partition during the installation or while formatting the other partitions.

    Read the article

  • Virtualbox port forwarding with iptables

    - by jverdeyen
    I'm using a virtual machine (VirtualBox) as a mail server. The host is an Ubuntu 12.04 and the guest is an Ubuntu 10.04 system. At first I forwarded port 25 to 2550 on the host and added a port forward rule in VirtualBox from 2550 to 25 on the guest. This works for all ports needed for the mail server. The guest has a host-only connection and a NAT (with the port forwarding). My mail server was receiving and sending mail properly. But all connections are coming from the VirtualBox internal IP, so every host connection is allowed, and that's not what I want.
    So... I'm trying to skip the VirtualBox forwarding part and just forward port 25 to the host-only IP of the guest system. I used these rules:
        iptables -F
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT
        iptables -A INPUT --protocol tcp --dport 25 -j ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -s 192.168.99.0/24 -i vboxnet0 -j ACCEPT
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -p tcp -i eth0 -d xxx.host.ip.xxx --dport 25 -j DNAT --to 192.168.99.105:25
        iptables -A FORWARD -s 192.168.99.0/24 -i vboxnet0 -p tcp --dport 25 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.99.0 -o eth0 -j MASQUERADE
        iptables -L -n
    But after these changes I still can't connect with a simple telnet (which was possible with my first solution). The guest machine doesn't have any firewall. I only have one network interface on the host (eth0) and a host-only interface (vboxnet0). Any suggestions? Or should I go back to my old solution (which I don't really like)?
    Edit: bridge mode isn't an option, I have only one IP available for the moment. Thanks!
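    For reference, the VirtualBox-level forwarding described above (host port 2550 to guest port 25) can also be configured from the host shell rather than the GUI. A minimal sketch, assuming the VBoxManage CLI and a guest VM named "mailvm" (the VM name is hypothetical):
        # Add a NAT port-forwarding rule on adapter 1: host TCP 2550 -> guest TCP 25.
        # The VM should be powered off when using modifyvm.
        VBoxManage modifyvm "mailvm" --natpf1 "smtp,tcp,,2550,,25"
        # Show the VM's settings, including the NAT forwarding rules just added.
        VBoxManage showvminfo "mailvm" | grep -i rule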

    Read the article

  • Good visuals supporting adopting Macintosh in a Windows company

    - by jdmuys
    I work in a Windows-only software service company, which just put up an internal contest for innovative ideas for the company. The idea I submitted is to let employees use a Mac instead of the mandatory PC if they wished to. My idea has been selected (among a few others) to reach the next stage of the contest. One of the items requested for the next stage is ONE visual that best illustrates the idea. While my pitch is rather good (I think), I have a hard time coming up with ONE visual that would be suggestive enough and not too fanboy-ish, or too restricted. That's why I am requesting suggestions.
    For reference, some of the points I intend to develop are (not in order):
      - de facto safety (little or no malware)
      - Apple as a company reached its leading position through innovation
      - (bio)diversity is a source of value for a service company, as it expands its reach
      - it makes financial sense
      - the Mac is the most compatible machine, making it a lot easier to test our software (especially web sites)
      - some OS X technologies can be valuable to a software service company (e.g. AppleScript)
      - some Apple tools can help us improve (e.g. Keynote)
      - it's good citizenship for our company, as Apple is now best in class according to Greenpeace
    I realize this question may be out of topic here. I'd be happy to have suggestions on where to post this question. Please do not argue why OS X might be better or worse than Windows. My question is very narrow. Thanks.

    Read the article

  • Attaching 3.5" desktop drive to MacBook SATA

    - by Kyle Cronin
    I have a mid-2007 MacBook that, according to the Apple Store, has suffered some liquid damage and requires a new logic board to operate correctly, a ~$750 repair I've been told (it would normally be around ~$300 were it not for the "liquid damage"). The unit itself works fine - the only problem I've been having is that the system does not recognize the battery and will not charge it. Curiously, the system can still be powered by the battery and even recognizes when the power cord is detached by dimming the backlight, but I digress.
    Now that this laptop will likely become a desktop, I'm wondering if it might be possible to attach a desktop drive. I recently purchased a 2TB SATA drive and I'm wondering if it's possible to somehow attach it where the current internal drive connects. Obviously the drive itself will not fit inside the device, but as the unit will spend the rest of its days on my desk, that's not really much of an issue. My main questions are:
      - Is this possible? If so, how would I connect the drive? Would a SATA extender cable work?
      - Is the SATA port on my MacBook capable of powering a desktop drive? Or should I just get a SATA male-to-female cable and see if I can power the drive through other means (a cheap power supply, for example)?
    The disk I'm referring to is the Hitachi Deskstar HD32000. Though I couldn't find that exact model on Hitachi's support site, these are the power requirements for a similar drive, the 7K2000 (2TB, 7200RPM, SATA II):
        Power requirement:         +5 VDC (+/-5%), +12 VDC (+/-10%)
        Startup current (A, max.): 1.2 (+5V), 2.0 (+12V)
        Idle (W):                  7.5
    From what I've read, 2.5" drives require 5V, meaning that my MacBook obviously is capable of producing it. The specs seem to suggest that this drive is capable of accepting it instead of the typical 12V - is this an accurate interpretation of the power requirements? Or does it need both 12V and 5V?

    Read the article

  • How to import a text file into powershell and email it, formatted as HTML

    - by Don
    I'm trying to get a list of all Exchange accounts, format them in descending order from largest mailbox, and put that data into an email in HTML format to email to myself. So far I can get the data, push it to a text file, as well as create an email and send it to myself. I just can't seem to get it all put together. I've been trying to use ConvertTo-Html, but it just seems to return data via email like "pageFooterEntry" and "Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo" versus the actual data. I can get it to send me the right data if I don't tell it to ConvertTo-Html, just have it pipe the data to a text file and pull from it, but then it's all run together with no formatting. I don't need to save the file, I'd just like to run the command, get the data, put it in HTML and mail it to myself. Here's what I have currently:
        #Connects to Database and returns information on all users, organized by Total Item Size, User
        $body = Get-MailboxStatistics -database "Mailbox Database 0846468905" | where {$_.ObjectClass -eq "Mailbox"} | Sort-Object TotalItemSize -Descending | ft @{label="User";expression={$_.DisplayName}},@{label="Total Size (MB)";expression={$_.TotalItemSize.Value.ToMB()}} -auto | ConvertTo-Html
        #Pause for 5 seconds for Exchange
        write-host -foregroundcolor Green "Pausing for 5 seconds for Exchange"
        Start-Sleep -s 5
        $toemail = "[email protected]" # Emails report to this address.
        $fromemail = "[email protected]" #Emails from this address.
        $server = "Exchange.company.com" #Exchange server - SMTP.
        #Email the report.
        $email = New-Object System.Net.Mail.MailMessage
        $email.IsBodyHtml = $True
        $email.To.Add($toemail)
        $email.From = $fromemail
        $email.Subject = "Exchange Mailbox Sizes"
        $email.Body = $body
        $client = New-Object System.Net.Mail.SmtpClient $server
        $client.UseDefaultCredentials = $true
        $client.Send($email)
    Any thoughts would be helpful, thanks!

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine.
    However - as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again.
    What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling.
    Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn, and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:
      - uses a database (MySQL, SQLite, etc.) for configuration and data storage
      - exposes an API for adding/removing hosts and services
    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
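    The sar idea mentioned at the end can be made concrete. A hedged sketch only, assuming a RHEL-style sysstat layout and the AWS CLI being available on the instance; the S3 bucket name is hypothetical:
        # Collect one sample per minute with sysstat (sa1 lives at
        # /usr/lib64/sa/sa1 on RHEL/CentOS, /usr/lib/sysstat/sa1 on Debian/Ubuntu).
        echo '* * * * * root /usr/lib64/sa/sa1 1 1' > /etc/cron.d/sysstat-1min
        # Just before termination, tag the binary sar file with the instance ID
        # and push it somewhere durable.
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        aws s3 cp "/var/log/sa/sa$(date +%d)" "s3://perf-archive/${INSTANCE_ID}.sa"
        # Later, the file can be replayed on any machine with sysstat installed:
        sar -u -f "${INSTANCE_ID}.sa"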

    Read the article

  • 2010 cgi script failure

    - by Barry F
    Hi. I hope you can help, I'm just a beginner! I have listed a few extra details which may not be relevant.
    I upload CGI scripts onto a local/personal directory on an Apache/2.2.10 server, using FTP95Pro in ASCII mode. The scripts execute correctly using perl on my web server in a terminal session, thus my code has no fatal syntax errors. Web pages 'action' each CGI script at /cgi-bin/. There are symbolic links which link system directory files to my local directory files. FollowSymLinks is enabled (unsure how). Permissions are correct (755). This set-up hasn't changed, apparently.
    The scripts have executed perfectly for years, up to 2010. But now, in 2010, I have replaced working scripts with new script/files, now with exactly the same text, filename and permissions. Only the date (last modified) has changed. But now I receive a 500 Internal Server Error, and cannot determine why. My server administrator assumes I have code errors. But the code is unchanged since last year, and it runs fine (albeit with no arguments) on the web server console using perl myscript.cgi
    Is there anything you can think of which may have changed? I'm suspicious of the new decade. I think the server swapped from Linux to Windows OS last year, but my server administrator got it all working OK. Is there something unusual he may have missed, related to 2010? Thank you in advance
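    A few quick shell checks often narrow down a sudden 500 like this. A hedged sketch, assuming the host is still Unix-like and using the script name already mentioned above (myscript.cgi):
        # Confirm ownership and the 755 permissions actually survived the upload.
        ls -l myscript.cgi
        # Look for DOS line endings; a trailing ^M on the shebang line is a
        # classic cause of a 500 after re-uploading a previously working script.
        head -1 myscript.cgi | cat -A
        # Syntax-check the script in place without running it as a CGI.
        perl -c myscript.cgi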

    Read the article

  • Exchange 2010 Transport rules stepping on each other

    - by TopHat
    I have a group of users that I have to restrict email access for, and so far using Exchange Transport Rules has worked very well. The problem I am having is that Rule 0 is supposed to bcc the email to a review mailbox but otherwise not change anything, and Rule 9 is supposed to block the email and throw a custom NDR to tell the user why they were blocked. Here are my results in practice, however:
      - If Rule 0 is enabled and Rule 9 is enabled, then only Rule 9 functions
      - If Rule 0 is disabled and Rule 9 is enabled, then Rule 9 functions
      - If Rule 0 is enabled and Rule 9 is disabled, then Rule 0 functions
    This is after the Transport Service has been restarted (multiple times actually). I have other rule pairs that work correctly. None of these are overlapping rulesets, however:
      - copy email going to an address outside the domain and then block
      - copy email coming in from outside and then block
    Here is the rule for copying internal emails (Rule 0):
        Apply rule to messages from a member of
        Blind carbon copy (Bcc) the message to
        except when the message is sent to a member of or [email protected]
    Here is the rule to block the same email (Rule 9):
        Apply rule to messages from a member of
        send 'Email to non-supervisors or managers has been prohibited. Please contact your supervisor for more information.' to sender with 5.7.420
        except when the message is sent to , [email protected],
    The distribution group used for membership in these rules is used for the other blocking and copying rules and works as expected. Is there something I missed in this setup? All of the copy rules are at the front of the transport rule group and all the actual blocks are at the end of the queue, if that makes a difference. Any thoughts as to why the email doesn't get copied when it gets blocked?

    Read the article

  • Apache HTTP Server+Tomcat: Which file generates mod_jk.conf, how to modify generated stuff, and how does httpd reach it?

    - by Sk8erPeter
    I'm using XAMPP with Apache HTTP Server and the Tomcat Add-On installed. There's a default mod_jk.conf which is generated by Tomcat when starting it. But which file generates this mod_jk.conf file? How can I modify the default values? By default, it looks like this: pastebin - mod_jk.conf. How does Apache HTTP Server reach this file? I can't see any reference to this file when looking into httpd.conf.
    When I put a VirtualHost in my httpd.conf file, and I put the line JkMount /* ajp13 into it, the Apache HTTP Server service can't start (it causes a 7024 event id error in Event Viewer, with error code 1, but nothing specific), but puts no error messages into error.log. The VirtualHost looks like this: pastebin - VirtualHost + JkMount. This way Apache HTTP Server cannot start. If I comment out the line JkMount /* ajp13, it starts without a problem. BUT if I put the following block, which is the same as in mod_jk.conf, before the mentioned VirtualHost again, the service can start!
        <IfModule !mod_jk.c>
            LoadModule jk_module "C:/xampp/tomcat/xampp/apache/modules/mod_jk.so"
        </IfModule>
    Why do I have to put this line in again? Why does it happen that http://localhost/example does work, so the query is redirected to AJP13, but I have to put the LoadModule line in again in another file?
    EDIT: I don't have a clue why, I surely modified something, but now /example doesn't work either... And the config above gives a 500 Internal Server Error... :S Thanks!

    Read the article

  • Make UEFI, GPT, Bootloader, SSD, USB, Linux and Windows work together

    - by user129552
    I like to use the latest hardware and the latest software; thus I have a laptop (Lenovo X220) with:
      - UEFI instead of BIOS
      - an SSD instead of an HDD
      - a GPT partitioning scheme instead of MBR
      - USB to boot from instead of optical disks
    I need to use both Windows and Linux. I tried to make them work alongside each other, but I didn't succeed. Most Linux distribution ISOs don't even really work on UEFI systems booted from USB (not even the self-claimed cutting-edge Fedora; I also tried Linux Mint Debian Edition and Sabayon Linux, according to this guide, which did not work). Only Ubuntu worked for me.
    I first installed Windows 8, which created sda1: Recovery, sda2: EFI system, sda3: msftres, sda4: NTFS Windows. Windows worked without a problem. I then created sda5: linux-swap and installed Ubuntu into sda6: btrfs. After rebooting, I was not presented GRUB2 as expected; instead my system just booted into Ubuntu. I could no longer access Windows. After fixing dpkg in btrfs Ubuntu, I followed the Ubuntu documentation on UEFI booting. The result left me with a broken GRUB2, but interestingly, when I wanted to select the device to boot from, I was not only presented the internal SSD, an attached USB device, or LAN, but also Grub2 (broken), Ubuntu and Windows.
    The result is not very satisfying to me. What would I have to do to fix everything? Or asked differently: what operating system should I install at what point, given my possibilities and requirements, so that I have a working bootloader in my UEFI GPT system which presents me a working Linux and Windows.
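    When untangling a setup like this, it can help to see the boot entries exactly as the firmware has them registered. A minimal sketch, assuming a booted Ubuntu session with the efibootmgr package installed:
        # List the UEFI boot entries (Windows Boot Manager, ubuntu/grub, etc.)
        # together with the partition and loader path each one points at.
        sudo efibootmgr -v
        # Inspect the EFI system partition (sda2 in the layout above) to see
        # which loaders are actually present on disk.
        sudo mount /dev/sda2 /mnt
        ls -R /mnt/EFI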

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account.
    A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically-inclined and would prefer to stick with passwords. SSH is not an issue, only SFTP is available.
    How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH.
    Currently our SSH configuration has this appended to it in order to jail the users in their own directories:
        # all customers have group 'customer'
        Match group customer
            ChrootDirectory /home/%u      # jail in home directories
            AllowTcpForwarding no
            X11Forwarding no
            ForceCommand internal-sftp    # force SFTP
            PasswordAuthentication yes    # for non-customer accounts we use keys instead
    Our servers are running Ubuntu 12.04 LTS.
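    One approach sometimes used for the "all files keep one owner" requirement is to create the extra login names with the same UID as the primary account, so the kernel treats them as the same owner no matter which name authenticated. A hedged sketch only, reusing the customer_developer1 name from the question; whether duplicate UIDs are acceptable operationally is a separate decision:
        # Create an extra SFTP login that shares customer's UID and group.
        # -o permits the duplicate (non-unique) UID; files created through this
        # login will therefore show 'customer' as their owner.
        useradd -o -u "$(id -u customer)" -g customer \
                -d /home/customer -s /usr/sbin/nologin \
                customer_developer1
        # Give the new login its own password to hand out.
        passwd customer_developer1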

    Read the article

  • Why is writing to my external hard drive slow, while benchmarks show fast writing?

    - by matix2267
    I have an iOmega eGo 320GB portable drive connected through USB 2.0 to my laptop running Windows Vista. It had been working fine for quite some time, until recently it became very slow when writing. For example, when copying a ~300MB movie over to the drive, at first it is extremely fast, but it actually doesn't write - it only puts it in the cache and then hangs on the last 10-20MB for about a minute. When copying larger files it's the same story: it starts fast but then slows down to ~5MB/s (sometimes even slower, down to 2MB/s). The strange thing is that I have always had caching disabled for this drive (it was disabled by default and I never bothered changing it).
    At first I thought that the disk was dying, so I checked S.M.A.R.T. values and everything is fine there. I also ran chkdsk and it seemed to fix the problem - it worked fast for a few minutes but then it slowed down again. I also tried plugging it into another USB port - no difference. Additionally, I noticed that reading under certain circumstances is sometimes slower, e.g. loading times for some games are ~10 times longer, whereas simply copying files from this drive to my internal HDD is fast.
    I ran a speed benchmark using CrystalDiskMark with a 5x100MB run and strangely got these results:
                  read     write (MB/s)
        Seq       33.05    28.25
        512k      17.30    15.27
        4k         0.267    0.372
        4kQD32     0.510    0.260
    This is different from what most other people have (I've found many threads about slow disk writes while googling, but all of them were slow on benchmarks too), which is why I decided to post this problem here.
    BTW, most of the time when writing (or sometimes reading) the activity LED is mostly idle (blinks a while and then stops for longer, sometimes has slower blinks ~1 sec, sometimes goes off for a few seconds - an extremely long blink :) ), but when benchmarking, defragmenting or just reading (copying from this drive, installing apps from installers there, watching HD videos) it is blinking really fast (like it should) and there are no slowdowns. It shouldn't be a driver issue unless the stock Windows drivers have some issues I'm not aware of.

    Read the article

  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the second sdb partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with:
        mdadm: set device faulty failed for /dev/sdb2: No such device
        mdadm: hot remove failed for /dev/sdb2: No such device or address
    Then I hot swapped the failed disk, partitioned the new disk and added the partitions to the respective arrays. All arrays got rebuilt properly except one, because in /dev/md2, the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare to the array, and its status remains degraded. Here's what mdadm --detail /dev/md2 shows:
        [root@ldmohanr ~]# mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.1
          Creation Time : Tue Dec 27 22:55:14 2011
             Raid Level : raid1
             Array Size : 52427708 (50.00 GiB 53.69 GB)
          Used Dev Size : 52427708 (50.00 GiB 53.69 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent
          Intent Bitmap : Internal
            Update Time : Fri Nov 23 14:59:56 2012
                  State : active, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1
                   Name : ldmohanr.net:2 (local to host ldmohanr.net)
                   UUID : 4483f95d:e485207a:b43c9af2:c37c6df1
                 Events : 5912611
            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       0        0        1      removed
               2       8       18        -      spare   /dev/sdb2
    To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help with removing device number 1 with 'removed' status and making /dev/sdb2 active.
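    For reference, mdadm accepts keywords in place of a device name for exactly this situation; a hedged sketch of the usual sequence, not verified against this particular array:
        # 'failed' removes all faulty members; 'detached' removes members whose
        # device node no longer exists (open() returns ENXIO), which matches a
        # slot left behind by a hot-swapped disk.
        mdadm /dev/md2 --remove failed
        mdadm /dev/md2 --remove detached
        # Once the stale slot is gone, the spare (/dev/sdb2) would normally be
        # promoted and start resyncing on its own; watch the progress here.
        cat /proc/mdstat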

    Read the article

  • Getting prompted for password accessing page through script even when client and server are in same domain

    - by Munawar
    I'm trying to pull up an internal webpage in an automated fashion using the methods in 'InternetExplorer.Application' via VBScript. But I'm getting prompted for a password, although the client and the server both are in the same domain. Predictably, when I manually try to access the web page, I don't have any problem. Only when I try using cscript.exe or iexplore.exe do I get prompted. I'm trying to automate some of the smoke tests we do after a new build is deployed, but this password prompt is getting in the way.
    Following are the system specs:
      - Client machine - IE 7.0, OS is Windows Server 2003
      - Server machine - Windows Server 2008
      - Both are in the same domain.
    So far I've unsuccessfully tried the following to automate the password input:
        system.diagnostics.process.start
        var WinHttpReq = new ActiveXObject("WinHttp.WinHttpRequest.5.1");
        WinHttpReq.Open("GET", "http://website", false);
        WinHttpReq.SetCredentials("username", "password", 0);
    Nothing seems to work. I checked in IIS; we have only anonymous and forms authentication enabled. Is there any configuration setting on the client machine that can be tweaked to bypass this? Although I'd hate to do it, since you step on the toes of twenty people trying to do that; the preferable way would be to input it programmatically, if that's possible. Also, if you can suggest a more appropriate forum, that'd be great too. Please help.

    Read the article

  • Access node.js local server though mobile via same shared wifi

    - by laggingreflex
    EDIT: I was stuck in this situation before, but then it was Apache-related. This time I'm using NodeJS, so the old answer doesn't help.
    I'm running a NodeJS webserver (not Apache this time) on port 80 on Windows 7. I want to access the webserver through my mobile, which shares the wifi router with my PC locally. http://localhost works from the PC. But I can't access http://192.168.1.4 from either my phone or even my computer. ipconfig /all on my computer lists my IP address as 192.168.1.4:
        Wireless LAN adapter Wireless Network Connection:
           IPv4 Address. . . . . . . . . . . : 192.168.1.4(Preferred)
    I can ping my phone's (internal) IP address [192.168.1.5] from the PC and, vice versa, I can ping my PC [192.168.1.4] from my phone. So why can't I access http://192.168.1.4 from my phone (or PC)? The firewall is off.

    Read the article

  • Comprehensive solution for managing patches, event viewing, change management, inventory, etc

    - by Holocryptic
    I'm looking for a solution that incorporates most or all of the following: patch management, server event viewing/tracking, AD change management, ticketing and an internal/external KB, remote access (the ability to shadow user sessions or create new ones), imaging, and inventory. Our environment contains Windows Servers and ESXi hosts (we're not completely virtual, but we're moving in that direction), plus various Cisco and Linksys switches and firewalls.
    This is a tall order, and I don't know if it can be done on a reasonable budget. I've looked and found some questions on SF that deal with some of this:
      - http://serverfault.com/questions/72015/active-directory-management-tools-for-medium-sized-forest-less-than-1000-users
      - http://serverfault.com/questions/4021/are-there-any-tools-to-do-change-management-with-active-directory-group-policy
      - http://serverfault.com/questions/21752/what-is-a-good-patch-update-management-server
    What I'm ideally looking for is a reasonably cheap solution that integrates these features into a central interface. We're a non-profit, so money is a limiting factor (the cheaper, the better; but we have a max of $15k). What we are trying to avoid is having to deal with multiple vendors, while maintaining scalability (we're creating more sites that we'll have to manage). Is this possible, or will we have to cobble together something to make it work for us?

    Read the article

  • What to look for in a switch with LAN/WAN versus an iSCSI SAN?

    - by Luke
    I'm setting up a VMware ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches shared between the iSCSI SAN and the LAN/WAN. The S60 switches are extremely powerful: they have 1.25 GB of buffer cache and < 9us latency. But they are very expensive (online price ~$15k per switch, actual quote a little less). I've been told that "by the book" you should have at least 2 internal switches for the SAN, and 2 switches for the LAN/WAN (each with a redundant partner). I know some of the pros and cons of each approach. What I'm wondering is, would it be more cost effective to disjoin the SAN from the LAN with less expensive switches?
    The answer to this question highlights what I should be looking for in a switch for the SAN. What should I be looking for in a LAN/WAN switch, in comparison to the SAN? With the above linked question for the SAN:
      - How is buffer latency measured?
      - When you see 36 MB of buffer cache, is that shared or per port? So would that be 768KB per port, or 36MB per port?
      - With 3 to 6 servers, how much buffer cache do you really need?
      - What else should I be looking at?
    Our application will be heavily using HTML5 websockets (a high number of persistent connections). The amount of data being sent is small; data sent between client and server isn't broadcast (it's not a chat/IM service). We will be doing some database reporting too (CSV export, sums, some joins). We are a small business and on a budget. We'd probably only be able to spend no more than $20k on switches total (2 or 4).

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :)
    Which brings me to some questions - is there any good/easy/recommended way to run something like Ubuntu as a host VM and CentOS 5.x as a guest OS (with which system - Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel-VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.
    EDIT: The target audience / users of this kind of system would be developers, each of whom needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there to build your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
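    On the hardware-support question raised above: KVM does require Intel VT-x or AMD-V for full acceleration, and it is quick to check per machine. A minimal sketch, assuming a Linux shell on the developer PC in question:
        # A non-zero count means the CPU advertises Intel VT-x (vmx) or AMD-V (svm).
        egrep -c '(vmx|svm)' /proc/cpuinfo
        # On RHEL/CentOS 5.4+, the kvm module should be loadable when the flag is present.
        lsmod | grep kvm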

    Read the article

  • How to convert JPEG JFIF files to JPEG Exif format?

    - by tigrou
    I recently put the SD card of my camera in a Windows 7 PC and started browsing the pictures on it. I noticed some were not aligned correctly and used the rotate feature included in Windows Photo Viewer in order to view them as I wanted. What I didn't know is that when the rotate feature is used, it also overwrites the picture when pressing the next or previous button, resulting in a possible loss of quality (which is in my opinion a bad idea; the app should at least warn the user of what will happen when using such a feature).
    After that, I re-inserted the SD card back in my camera and a bad surprise happened: the rotated pictures could not be previewed anymore. Instead, I got a black screen saying "Incompatible JPEG format". Other files (untouched) are still working OK. To try to understand what happened, I opened a JPEG file from the camera and one generated on Windows 7 in a hex editor. Here is the difference:
      - The camera JPEG files have an Exif tag in them (with 0xE1 in the header).
      - The other JPEG files (Windows 7) first have a JFIF tag in them (with 0xE0 in the header), followed by an Exif tag.
    So if I understand it well, both are JPEG files, but using a different internal format. Here is my question: is it possible (using some tool) to convert JFIF files to Exif format? I understand that the original camera files have been re-encoded and thus lose some quality (getting the originals back is impossible). What I want to know is whether I can convert them from JFIF back to Exif (without a second loss of quality, if possible...).
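    On the "using some tool" part, one possible avenue is sketched here only as a hedged example (assuming the jhead and exiftool utilities are installed, and a hypothetical filename); whether the camera then accepts the rewritten file depends on its firmware:
        # jhead can create a minimal Exif (APP1) header in a JPEG that lacks one,
        # without re-encoding the image data.
        jhead -mkexif IMG_1234.JPG
        # Inspect which metadata segments the file now carries.
        exiftool IMG_1234.JPG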

    Read the article

  • AFP/SSH stopped working on OS X Server

    - by churnd
    I have 3 Mac OS X servers all bound to AD, all configured in the Golden Triangle setup. All 3 are completely separate from each other in terms of services, but all reside on the same internal network and are all bound to the same Active Directory domain. Two are 10.5.x (latest updates) and one is 10.6.3. Last weekend, all 3 simultaneously stopped allowing Active Directory users access to certain services, specifically AFP & SSH. SMB still works fine on all 3. I asked the AD admin if anything changed, and he said "Yes, we made a change to user accounts to toughen up security", and suggested I use [email protected] instead of just username. This still didn't work. I have completely removed one of my servers from AD, and re-joined, but this didn't work either. I can do kinit from command line and get a Kerberos ticket. sudo klist -ke shows all services are configured to use the correct Kerberos principles. I have been scavenging the logs for any useful info. The AFP log just shows that I'm connecting and disconnecting. The DirectoryService.log shows stuff about misconfigured Kerberos hashes, but my research is showing that's not uncommon. /var/log/system.log isn't showing anything useful that I can see. I'm not sure where to go from here. Any help/ideas appreciated.

    Read the article

  • Router reporting failed admin login attempts from home server

    - by jeffora
    I recently noticed in the logs of my home router that it fairly regularly lists the following entry:
        [admin login failure] from source 192.168.0.160, Monday, June 20,2011 18:13:25
    192.168.0.160 is the internal address of my home server, running Windows Home Server 2011. Is there any way I can find out what specifically is trying to log in to the router? Or is there some explanation for this behaviour? (Not sure if this belongs here or on Super User...)
    [Update] I've run both Wireshark and netmon for a while on my home server. Wireshark captured the traffic, but didn't really show anything useful (or nothing I could make use of). A simple HTTP GET request is sent from the server (192.168.0.160) to the router (192.168.0.1), from a seemingly random port (I've seen examples from 50068 and 52883), and it appears to do it twice in quick succession (incrementing the port by 1), about every hour. Running netstat around the time of the failure didn't show anything (probably too long after anyway). I tried using netmon as it categorises traffic by process, so I thought it might show a corresponding process for the port. Unfortunately, this comes in under the 'unknown' category, meaning it's basically just a slower, less useful Wireshark. I know there's not much to go on here, but does this help in any way?

    Read the article

  • plesk: how to configure reverse proxy rules properly?

    - by rvdb
    I'm trying to configure reverse proxy rules in vhost.conf. I have Apache-2.2.8 on Ubuntu-8.04, monitored by Plesk-10.4.4. What I'm trying to achieve is defining a reverse proxy rule that forwards all traffic to - say - http://mydomain/tomcat/ on to the Tomcat server running on port 8080. I have mod_rewrite and mod_proxy loaded in Apache. As far as I understand the mod_proxy docs, entering the following rules in /var/www/vhosts/mydomain/conf/vhost.conf should work:
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyRequests off
        RewriteRule ^/tomcat/(.*)$ http://mydomain:8080/$1 [P]
    Yet, I am getting an HTTP 500: Internal Server Error when requesting the above URL. (Note: I decided to use a rewrite rule in order to at least get some information logged.) I have made mod_rewrite log extensively, and find the following entries in the logs (note: due to a limitation of max. 2 URLs in posts of new users, I have modified all following URLs so that they only contain 1 slash after http:. In case you're suspecting typos: this was done on purpose):
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) init rewrite engine with requested uri /tomcat/testApp/
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (3) applying pattern '^/tomcat/(.*)$' to uri '/tomcat/testApp/'
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) rewrite '/tomcat/testApp/' - 'http:/mydomain:8080/testApp/'
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) forcing proxy-throughput with http:/mydomain:8080/testApp/
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (1) go-ahead with proxy request proxy:http:/mydomain:8080/testApp/ [OK]
    This suggests that the rewrite and proxy part is processed OK; still, the proxied request produces a 500 error. Yet:
      - Addressing the testApp directly via http:/mydomain:8080/testApp does work.
      - The same setup does work on my local computer.
    Is there something else (Plesk-related, perhaps?) I should configure? Many thanks for any pointers! Ron
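    One common cause of a 500 on exactly this pattern - a RewriteRule with the [P] flag pointing at an http:// target - is that mod_proxy is loaded but mod_proxy_http is not. A quick hedged check from the host (module names and commands as on a stock Ubuntu Apache; a Plesk install may differ slightly):
        # Both proxy_module and proxy_http_module need to show up here.
        apache2ctl -M 2>/dev/null | grep -i proxy
        # On Debian/Ubuntu the missing module can usually be enabled like this:
        a2enmod proxy_http
        /etc/init.d/apache2 restart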

    Read the article

  • Windows 7, going crazy with environment variables

    - by roymustang86
    So, I am trying to learn Java. I installed the JDK and proceeded to write a few programs. Each time, I have to give the full path to javac.exe to compile the .java file. So, I decided to tweak the %PATH% variable, and no matter what I change it to, it doesn't work. When I do an echo %PATH%, I get:
        'Program' is not recognized as an internal or external command, operable program or batch file.
    This is my Path variable's contents:
        C:\app\product\11.1.0\client_1\bin;%CommonProgramFiles%\Microsoft Shared\Windows Live;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;"C:\Program Files (x86)\Common Files\Roxio Shared\DLLShared\";"C:\Program Files\Broadcom\Broadcom 802.11";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\12.0\DLLShared\";"C:\Program Files (x86)\Roxio\OEM\AudioCore\";"C:\Program Files (x86)\Intel\Services\IPT\"
    How do I work around this? The double quotes were not there before; I added them thinking the space was the problem.

    Read the article

  • fwbuilder/iptables manually scripted + autogenerated rules at startup?

    - by Jakobud
    Fedora 11. Our previous IT guy set up iptables rules on our firewall in a way that is confusing to me, and he didn't document any of it. I was hoping someone could help me make some sense of it.
    The iptables service is obviously starting at startup, but the /etc/sysconfig/iptables file was untouched (default values). I found that in /etc/rc.local he was doing this:
        # We have multiple ISP connections on our network.
        # The following is about 50+ rules to route incoming and outgoing
        # information. For example, certain internal hosts are specified here
        # to use ISP A connection while everyone else on the network uses
        # ISP B connection when accessing the internet.
        ip rule add from 99.99.99.99 table Whatever_0
        ip rule add from 99.99.99.98 table Whatever_0
        ip rule add from 99.99.99.97 table Whatever_0
        ip rule add from 99.99.99.96 table Whatever_0
        ip rule add from 99.99.99.95 table Whatever_0
        ip rule add from 192.168.1.103 table ISB_A
        ip rule add from 192.168.1.105 table ISB_A
        ip route add 192.168.0.0/24 dev eth0 table ISB_B
        # etc...
    and then near the end of the file, AFTER all the ip rules he just declared, he has this:
        /root/fw/firewall-rules.fw
    He's executing the firewall rules file that was auto-generated by fwbuilder. Some questions:
      - Why is he declaring all these ip rules in rc.local instead of declaring them in fwbuilder like all the other rules? Any advantage or necessity to this? Or is this just a poorly organized way to implement firewall rules?
      - Why is he declaring ip rules BEFORE executing the fwbuilder script? I would assume that one of the first things the fwbuilder script does is get rid of any existing rules before declaring all the new ones. Am I wrong about this? If that were the case, the fwbuilder script would basically just delete all the ip rules that were defined in rc.local. Does this make any sense?
      - Why is he executing all this stuff at startup in rc.local instead of just using iptables-save to keep the firewall settings at /etc/sysconfig/iptables, where they will get applied at startup?
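    On the last question: iptables-save only captures the netfilter tables themselves, not the ip rule / ip route ... table X policy-routing entries, which may be exactly why those live in rc.local. A hedged sketch of the Fedora-style persistence being asked about:
        # Dump the currently loaded netfilter rules into the file the iptables
        # service restores at boot on Fedora/RHEL-style systems.
        iptables-save > /etc/sysconfig/iptables
        # Policy routing is separate kernel state and is NOT captured above; it
        # still has to be re-created at boot (rc.local, an ifup hook, etc.).
        ip rule show
        ip route show table ISB_A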

    Read the article
