Search Results

Search found 20021 results on 801 pages for 'software engeneering learner'.

Page 693/801 | < Previous Page | 689 690 691 692 693 694 695 696 697 698 699 700  | Next Page >

  • Cisco ASA 5510 Time of Day Based Policing

    - by minamhere
    I have a Cisco ASA 5510 set up at a boarding school. I determined that many (most?) of the students were downloading files, watching movies, etc., during the day, and this was causing the academic side of our network to suffer. The students should not even be in their rooms during the day, so I configured the ASA to police their network segment and limit their outbound bandwidth. This resolved all of our academic issues, and everyone was happy. Except the resident students. I have been asked to change/remove the policing policy at the end of the day, to allow the residents access to the unused bandwidth at night. There's no reason to let bandwidth sit unused at night just because it would be abused during the day. Is there a way to set up time-of-day-based policies on the ASA? Ideally I'd like to be able to open up the network at night and all day during weekends. If I can't set time-based policies, is it possible to schedule the ASA to load a set of commands at a specific time? I suppose I could just set up a scheduled task on one of our servers to log in and make the changes with a simple script, but this seems like a hack, and I'm hoping there is a better or more standard way to accomplish this. Thanks. Edit: If there is a totally different solution that would accomplish a similar goal, I'd be interested in that as well. Free/cheap would be ideal, but if a separate internet connection is my only other option, it might be worth fighting for money for hardware or software to do this better or more efficiently.
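    For what it's worth, below is a hedged sketch of what a time-based policy could look like on the ASA: a time-range object attached to an ACL that the policing class-map matches. The names, subnet, interface and rates are placeholders, and whether your ASA software version applies a time-range to an MPF policing class exactly this way is something to verify in a lab first.

        time-range DAYTIME
         periodic weekdays 8:00 to 17:00
        !
        access-list STUDENT_DAY extended permit ip 10.1.2.0 255.255.255.0 any time-range DAYTIME
        !
        class-map STUDENT_DAY_CLASS
         match access-list STUDENT_DAY
        !
        policy-map STUDENT_POLICY
         class STUDENT_DAY_CLASS
          police output 2000000 37500
        !
        service-policy STUDENT_POLICY interface inside

    Outside the DAYTIME window the ACL entry goes inactive, so the student traffic would no longer match the policing class.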

    Read the article

  • Setup: Eclipse in Ubuntu with Apache2 and Subversion

    - by Ricalsin
    Trying to set up Eclipse. I am running Ubuntu 10.10 (Maverick), Apache 2.2.16 and Subversion 1.6.12. Eclipse Help > About > Installed Software says: Eclipse Platform 3.5.2, Subclipse 1.0.0, Version Control with Subversion 1.1.1. The Subclipse wiki I followed is here. I have installed the libsvn-java package as discussed and added the line "-Djava.library.path=/usr/lib/jni" to the eclipse.ini file. I checked the Eclipse Help > About > Configuration settings and both of these lines are listed: eclipse.vmargs=-Djava.library.path=/usr/lib/jni and java.library.path=/usr/lib/jni. I checked that those files are in those directories. Still, when I open Preferences > Team > SVN, an error dialog shows: Failed to load JavaHL Library. These are the errors that were encountered: no libsvnjavahl.1 in java.library.path; Incompatible JavaHL library loaded, 1.3.x or later required. I followed the "Testing JavaHL libraries" troubleshooting section at the bottom of the wiki: I downloaded the tarball and ran it in a folder on my desktop with no problems. Then I followed the instructions and placed that file INSIDE the path (/usr/lib/jni/testJavaHL) and ran it from there. There are 50 tests performed and each one of them came back with this same error (posting only one for brevity):

        50) testCommitRevprops(org.tigris.subversion.javahl.BasicTests)
        java.io.FileNotFoundException: /usr/lib/jni/testJavaHL/local_tmp/greek_files/iota (No such file or directory)
            at java.io.FileOutputStream.open(Native Method)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:209)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:160)
            at org.tigris.subversion.javahl.WC.materialize(WC.java:70)
            at org.tigris.subversion.javahl.SVNTests.buildGreekFiles(SVNTests.java:303)
            at org.tigris.subversion.javahl.SVNTests.setUp(SVNTests.java:222)
            at org.tigris.subversion.javahl.RunTests.main(RunTests.java:111)
        FAILURES!!! Tests run: 50, Failures: 0, Errors: 50

    Any ideas as to how/why the "local_tmp/greek_files/iota" part is appended to the directory? I assume that's my problem. I'm also having a problem with New Repository Location, as the directory location of my svn repository is one level above the home directory, which is prepended to whatever I place in the dialog box, resulting in this error: svn: '/home/ricalsin/file:/home/svn' does not exist. Thank you for any help.
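    As a hedged first check (package name and paths per Ubuntu's libsvn-java layout; exact filenames may differ), it can help to confirm which JavaHL binaries are actually in /usr/lib/jni and which Subversion series they belong to:

        dpkg -l libsvn-java                # installed JavaHL package and version
        ls -l /usr/lib/jni/ | grep -i svn  # the libsvnjavahl shared objects Eclipse should find
        svn --version | head -n 1          # the JavaHL build should match this Subversion series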

    Read the article

  • Errors in Homebrew on OS X Lion

    - by Bo Tian
    I just ran the Homebrew script as described in the installation page. I then ran brew doctor in Terminal, and it returned several errors. I'm not sure how to fix those errors; please help.

        $ brew doctor
        Error: Some directories in /usr/local/share/man aren't writable.
        This can happen if you "sudo make install" software that isn't managed by Homebrew.
        If a brew tries to add locale information to one of these directories, then the
        install will fail during the link step. You should probably `chown` them:
            /usr/local/share/man/de
            /usr/local/share/man/de/man1
        Error: You have Xcode 4.2, which is outdated. Please install Xcode 4.3.
        Error: Unbrewed dylibs were found in /usr/local/lib. If you didn't put them there
        on purpose they could cause problems when building Homebrew formulae, and may need
        to be deleted. Unexpected dylibs:
            /usr/local/lib/libcdt.5.dylib
            /usr/local/lib/libcgraph.6.dylib
            /usr/local/lib/libgraph.5.dylib
            /usr/local/lib/libgvc.6.dylib
            /usr/local/lib/libgvpr.2.dylib
            /usr/local/lib/libpathplan.4.dylib
            /usr/local/lib/libxdot.4.dylib
        Error: Unbrewed .pc files were found in /usr/local/lib/pkgconfig. If you didn't put
        them there on purpose they could cause problems when building Homebrew formulae, and
        may need to be deleted. Unexpected .pc files:
            /usr/local/lib/pkgconfig/libcdt.pc
            /usr/local/lib/pkgconfig/libcgraph.pc
            /usr/local/lib/pkgconfig/libgraph.pc
            /usr/local/lib/pkgconfig/libgvc.pc
            /usr/local/lib/pkgconfig/libgvpr.pc
            /usr/local/lib/pkgconfig/libpathplan.pc
            /usr/local/lib/pkgconfig/libxdot.pc
        Error: /usr/bin occurs before /usr/local/bin
        This means that system-provided programs will be used instead of those provided by
        Homebrew. The following tools exist at both paths: 2to3
        Consider amending your PATH so that /usr/local/bin is ahead of /usr/bin in your PATH.
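    The brew doctor output itself points at the fixes; here is a minimal sketch of the suggested clean-up, assuming a single-user machine (the shell profile file and exact paths are illustrative):

        # take ownership of the man page directories brew doctor flagged
        sudo chown -R "$(whoami)" /usr/local/share/man/de
        # put Homebrew's bin directory ahead of the system one
        echo 'export PATH="/usr/local/bin:$PATH"' >> ~/.bash_profile
        source ~/.bash_profile

    The unbrewed dylibs and .pc files look like a manually installed Graphviz; whether to delete them or leave them in place is a judgement call rather than something brew doctor can decide for you.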

    Read the article

  • How come my Intel 520 180GB SSD performs extremely poorly?

    - by Willem
    I recently installed a new Intel 520 series 180GB SSD in my brand new MacBook Pro. The system is as follows:

        Model: MacBook Pro 15-inch, Late 2011 (MacBookPro8,2)
        Processor: 2.4 GHz Intel Core i7
        Memory: 16 GB 1333 MHz DDR3
        Graphics: AMD Radeon HD 6770M 1024 MB
        Software: Mac OS X Lion 10.7.3
        Main Drive Bay: Intel 520-series 180GB SATA-3 (6Gb/s negotiated link) SSD (Firmware: 400i) [80GB free]
        Optical Bay: Toshiba 5400 RPM 750GB SATA-2 HDD
        Trim: Enabled (according to the Trim Enabler app)

    And here are the speeds I'm getting: read 412 MB/s, write 186 MB/s. What have I done wrong? I expected roughly 500 MB/s for both reads and writes. I have seen benchmarks where lesser SSDs (even SATA-2 ones) outperform my write speeds by far, and the Intel 520 SSDs are supposed to be top-class drives. The Trim Enabler report also looks a bit odd compared to the screenshots on their site, and the S.M.A.R.T. attributes my drive reports through the smartctl tool from smartmontools don't seem to match the attributes Intel defines for this drive. I'm going to try to find a S.M.A.R.T. attribute reader for OS X that supports the Intel 520 series.
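    As a hedged sanity check from the command line (field names in the System Profiler output can differ between OS X releases), it may be worth confirming what link speed the SSD actually negotiated, independently of the benchmark tool:

        # dump the SATA tree and look for the SSD's negotiated link speed
        system_profiler SPSerialATADataType | grep -i "speed"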

    Read the article

  • Ubuntu karmic, chroot, Ubuntu server 8.04 LTS and Plesk

    - by morpheous
    I am pretty new to the Linux environment, and although I have been using the Ubuntu Karmic desktop for development work, I have always preferred the GUI tools and very rarely use the terminal. I am about to launch a site (on a VPS) which will be running Ubuntu Server 8.04 LTS and Plesk. I want to install both the server (8.04 LTS) and Plesk in a chroot on my Karmic desktop. I also want to test-install some packages I have written for the server, and generally familiarize myself with the environment before I sign up to the hosting service. I will be running a streamlined server (no GUI), with only the following software: PHP, Python, MySQL, PostgreSQL, Apache, Plesk, and the third-party packages I developed. In terms of mail, I think the hosting provider provides a mail service, so I don't know if I will need to run my own mail server. I would like some advice on the following:
        1. How can I install Ubuntu 8.04 Server LTS in a chroot?
        2. How can I install Plesk in Ubuntu 8.04 Server LTS (from the command line)?
        3. Can anyone recommend how I back up the remote server when I go live (i.e. what tools to use)? I will need to back up both databases, as well as some config data and installation packages.
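    For the chroot itself, a minimal sketch of the usual debootstrap approach on a Debian/Ubuntu desktop (the target directory and mirror URL are illustrative; 8.04 corresponds to the "hardy" release name):

        sudo apt-get install debootstrap
        sudo mkdir -p /srv/chroot/hardy
        sudo debootstrap hardy /srv/chroot/hardy http://old-releases.ubuntu.com/ubuntu/
        # enter the chroot and work from its own command line
        sudo chroot /srv/chroot/hardy /bin/bash

    Installing Plesk inside that chroot would then follow Plesk's own command-line installer; the sketch above only gets the base 8.04 system in place.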

    Read the article

  • What are the biggest, best CPUs that support Physical Address Extension?

    - by Giffyguy
    I'm looking for a CPU that will support PAE and fit into an LGA775 socket. This combination is very much preferred for my current server hardware/software setup. My priorities, from highest to lowest:
        1. PAE and LGA775
        2. At least 1066 MHz FSB
        3. Largest CPU cache possible
        4. Multiple cores if possible
        5. HyperThreading if possible
    Most other factors are of little to no consequence. I'm finding it very difficult to figure out what my options are. Intel doesn't have much useful information on PAE (since x64 is so dominant), and Wikipedia simply says that "PAE is provided by Intel Pentium Pro (and above) CPUs - including all later Pentium-series processors except the 400 MHz bus versions of the Pentium M." All of Intel's listed Pentium CPUs support Intel 64, which makes me seriously doubt they will support PAE with a 32-bit OS. And Wikipedia's claim is so vague that I have no idea whether it means up to and including the x64 Prescott CPUs. PAE is supposed to be an aspect of the x86 architecture, and I believe it is no longer supported in an x64 environment. Please correct me if I am wrong.

    Read the article

  • Sandra reports my CPU as "Engineering Sample", how can I be sure this is correct?

    - by stevenvh
    I ran SiSoftware's Sandra on my new PC, and for my CPU it reports:

        Generation : G8 / T29
        Name : TN0 (Trinity) FX/Opteron 32nm (ES)
        Revision/Stepping : 0 : 10 / 1
        Stepping Mask : TN-A1
        Microcode : MU6F10010F

    The (ES) is a well-known code in product development, meaning "Engineering Sample". Those are beta versions of the CPU, which may still contain some bugs, or even have features switched off. I contacted both the PC's manufacturer, Medion, and AMD about this. I have to downvote the Medion helpdesk here: the person I talked to boldly said Sandra was wrong (without knowing how Sandra got this information; he didn't even know the software) and used the word "impossible". His conclusion was "We're not taking this in consideration for service". Right. So, if you like Medion for their good prices, but like good support even better, you may consider buying your PC elsewhere. AMD was more helpful, but wanted to be sure before replacing the part (which I find reasonable). They suggested that I dismount the cooler from the CPU to check what is printed on it. I'm a bit reluctant here: I would have to wipe the thermal paste from the CPU, and won't know for sure my cooling will still be OK afterwards. Questions:
        1. Has anybody actually found a confirmed ES CPU in her PC?
        2. Is anybody aware of Sandra erroneously reporting CPUs as Engineering Samples?
        3. How can you tell an ES, apart from the print on the package?
        4. Shouldn't the Stepping Mask identify the CPU uniquely?

    Read the article

  • SSH works in putty but not terminal

    - by Ryan Naddy
    When I try to ssh to this host in a terminal:

        ssh [email protected]

    I get the following error:

        Connection closed by 69.163.227.82

    When I use PuTTY, I am able to connect to the server. Why is this happening, and how can I get this to work in a terminal?

        ssh -v [email protected]
        OpenSSH_6.0p1 (CentrifyDC build 5.1.0-472) (CentrifyDC build 5.1.0-472), OpenSSL 0.9.8w 23 Apr 2012
        debug1: Reading configuration data /etc/centrifydc/ssh/ssh_config
        debug1: /etc/centrifydc/ssh/ssh_config line 52: Applying options for *
        debug1: Connecting to sub.domain.com [69.163.227.82] port 22.
        debug1: Connection established.
        debug1: identity file /home/ryannaddy/.ssh/id_rsa type -1
        debug1: identity file /home/ryannaddy/.ssh/id_rsa-cert type -1
        debug1: identity file /home/ryannaddy/.ssh/id_dsa type -1
        debug1: identity file /home/ryannaddy/.ssh/id_dsa-cert type -1
        debug1: identity file /home/ryannaddy/.ssh/id_ecdsa type -1
        debug1: identity file /home/ryannaddy/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5
        debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH_5*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.0
        debug1: Miscellaneous failure
        Cannot resolve network address for KDC in requested realm
        debug1: Miscellaneous failure
        Cannot resolve network address for KDC in requested realm
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        Connection closed by 69.163.227.82
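    The close comes right after the Kerberos warnings and the Diffie-Hellman group-exchange request. As a hedged diagnostic (standard OpenSSH client options; "user" is a placeholder for the account in the obfuscated address above), it may help to rule those two stages out separately:

        # skip GSSAPI/Kerberos so the Centrify build stops looking for a KDC
        ssh -vvv -o GSSAPIAuthentication=no user@sub.domain.com
        # sidestep DH group exchange entirely by forcing a fixed group
        ssh -vvv -o KexAlgorithms=diffie-hellman-group14-sha1 user@sub.domain.com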

    Read the article

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions: C (50 GB) and D (250 GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB) and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say QEMU, VMware, etc.)? An alternative is using Wubi. In that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the two is better, if either is a way out at all? Note: Since my post in July 2010, I have been using, and have tried several ways of maintaining, a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted with an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point make was left waiting while mount.ntfs showed 100% CPU usage.
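    For context, here is a minimal sketch of the qemu-nbd mounting approach described in the note (device node, image name and mount point are illustrative; the nbd kernel module must be available):

        sudo modprobe nbd max_part=8
        sudo qemu-nbd --connect=/dev/nbd0 corpus.qcow2
        sudo mount /dev/nbd0 /mnt/corpus    # append p1, p2, ... if the image has a partition table
        # ... work on the corpus ...
        sudo umount /mnt/corpus
        sudo qemu-nbd --disconnect /dev/nbd0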

    Read the article

  • Firebox 1250e Core Failing?

    - by Noah
    We have 2 Firebox 1250e Core firewall boxes in our production environment, serving in active and passive modes. A few months back, the active box was flashing a warning light, so our consultant removed it and plugged it into a test network. Everything appeared to be working fine, so he reloaded it into the production environment, and we didn't see any other issues. Fast forward to last week, and our network was constantly dropping connections over RDC, timing out, and performing as if there was a traffic issue. I turned off the production box and everything began to work fine immediately. At this point, though, I'm not sure how to proceed. Should the box be completely replaced? Is there any recommended testing we could do to determine if there is a failure of some type with this device? Should we try upgrading the software on it? I know the environment isn't the issue, since the passive box (which is now the active one) is working fine. We'd like to have 2 in production, though, for safety/failover purposes. I am not a network admin, but am hoping someone here might be able to provide some guidance.

    Read the article

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by mfn
    Hi, I'm quite fond of munin and use it at home to monitor my PCs. What was super-duper easy under Linux has been pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the munin plugin for it was basically already there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running against a wall. My research so far suggests that Windows does not provide any out-of-the-box way to retrieve temperature/fan speed data; third-party applications are necessary which know how to communicate with the motherboard chips. The best I came up with is that SpeedFan exposes a shared memory interface, and there exists a library which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's a bug report about it open at SpeedFan, but it's currently not moving (although the SFSNMP author is active there). So, unless that's going to work any time soon, are there any alternatives? I'm not fond of buying software to get this feature, given that I take it for granted that my system should expose the information needed to monitor it properly, but don't let that stop you from answering.

    Read the article

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon EC2 c1.xlarge instances, on the Amazon AMI. The servers are duplicates of each other, running the exact same hardware and software. Each server's spec is:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    A couple of weeks ago we ran a yum upgrade on one of the servers. Starting with this upgrade, the upgraded server began showing a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

        # proper server - w
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/1  82.80.137.29  00:28   14:05  0.01s  0.01s -bash
              pts/2  82.80.137.29  00:38   0.00s  0.02s  0.00s w

        # proper server - iostat
        Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   9.03   0.02     4.26     0.17    0.13  86.39
        Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read   Blk_wrtn
        xvdap1    1.63        1.50       55.00    367236   13444008
        xvdfp1    4.41       45.93       70.48  11227226   17228552
        xvdfp2    2.61        2.01       59.81    491890   14620104
        xvdfp3    8.16       14.47       94.23   3536522   23034376
        xvdfp4    0.98        0.79       45.86    192818   11209784

        # problematic server - w
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/0  82.80.137.29  00:28   15:04  0.02s  0.02s -bash
              pts/1  82.80.137.29  00:38   0.00s  0.05s  0.00s w

        # problematic server - iostat
        Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   7.97   0.04     3.43     0.19    0.07  88.30
        Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read   Blk_wrtn
        xvdap1    2.10        1.49       76.54    374660   19253592
        xvdfp1    5.64       40.98       85.92  10308946   21612112
        xvdfp2    3.97        4.32       93.18   1087090   23439488
        xvdfp3   10.87       30.30      115.14   7622474   28961720
        xvdfp4    1.12        0.28       65.54     71034   16487112

    Read the article

  • Asus MyCinema U3100mini Choppy

    - by dsimcha
    I'm running an Asus MyCinema U3100mini ATSC tuner on Windows 7 64-bit. When I play live TV in Windows Media Center, it's very choppy and uses 500+ MB of RAM, I'm guessing due to the hard drive buffering functionality. Is there any way to disable the live TV pause buffer completely? If not, can anyone recommend alternative software that (1) works with the MyCinema and (2) is lightweight, not horribly bloated with features I'll never use the way Windows Media Center is? Edits: This is a dual-boot system. I've discovered that the tuner actually works fine on XP. It also works fine on my other computer, which has slower hardware and also runs Windows 7 64-bit. The problem actually seems to be with playback at large screen sizes, not with hard drive buffering: everything works fine below a certain window size and fails for large windows or full screen. Also, the same thing seems to happen whether playing live or recorded TV. As far as the obvious stuff goes, I have the latest video drivers from ATI for my Radeon x1050.

    Read the article

  • Setting up Live @ EDU

    - by user73721
    [PROBLEM] Hello everyone. I have a small issue here. We are trying to get our Exchange accounts (students only) ported over from an Exchange Server 2003 to the Microsoft cloud service known as Live @ EDU. The problem we are having is that in order to do this we need to install 2 pieces of software: (1) OLSync and (2) Microsoft Identity Lifecycle Manager. "Download the Galsync.msi here" - the "here" link takes you to a page that needs a login for an admin account for Live @ EDU. That part works. However, once logged in it redirects to a page that states:

        https://connect.microsoft.com/site185/Downloads/DownloadDetails.aspx?DownloadID=26407
        Page Not Found
        The content that you requested cannot be found or you do not have permission to view it.
        If you believe you have reached this page in error, click the Help link at the top of
        the page to report the issue and include this ID in your e-mail:
        afa16bf4-3df0-437c-893a-8005f978c96c

    [WHAT I NEED] I need to download that file. Does anyone know of an alternative location for that installation file? I also need to obtain Identity Lifecycle Manager (ILM) Server 2007, Feature Pack 1 (FP1). If anyone has any helpful information, that would be fantastic! As well, if anyone has completed a migration of accounts from an on-site Exchange 2003 server to the Microsoft Live @ EDU servers, any general guidance would be helpful! Thanks in advance.

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal was previously infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software - typically, a week or so before it was updated. I have configured the event logs to overwrite as required and to be 5056 KB in maximum size. I have also attempted:
        - Disabling the Event Log service & restarting
        - Renewing the EVT files in the Windows\system32\config directory
        - Restarting the Event Log service and restarting
        - Clearing the event log in the Services MMC
        - Resetting the filters to default in the Services MMC
        - Using the EVENTCREATE command remotely from a CMD window on the server to force an event creation event
    So far the only operation to have any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time that the computer has managed to create events is while it is being restarted. Has anyone got any ideas on how to proceed? I'm thinking about possibly refreshing the Windows\system32\config\SystemProfile folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to be running on 'up-time' for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that allow me to control the event logging options or default security options so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.
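    For reference, a hedged sketch of the kind of remote EVENTCREATE test described above (the computer name, event source and description are placeholders):

        eventcreate /s TERMINAL01 /t INFORMATION /id 100 /l APPLICATION /so EventLogTest /d "Remote event log write test"

    If that remote write succeeds but local logging still stalls, the problem is more likely the local Event Log service or its security policy than the log files themselves.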

    Read the article

  • How can I change the default location/action of 'Open Outlook Data File' in Outlook 2010?

    - by Chadddada
    I have recently deployed a Remote Desktop Session Host server that functions as a remote Microsoft Office 2010 workspace for users. As part of locking down this server I have installed all programs on the D: drive and, through the use of Group Policy, hidden all the drives on the server from standard users. In addition to hiding these drives, I am not allowing users to save anything locally (on the server) or open Libraries. However, one of the functions of the server is to provide the Outlook client. Often users will have their .PST file stored on a network location and want to open this in Outlook. Can I change the default action or location that File > Open > Open Outlook Data File looks in or tries to pull the file from? The default location seems to be under Users / Libraries. When you click 'Open' you get a warning: "This operation has been cancelled due to restrictions in effect on this computer." Clicking OK drops the user into a small menu that shows attached network drives under Computer. Can I instead have the 'Open' click drop the users into a defined network drive, or just open Computer and allow them to select a share? I don't want them to see the error message. A solution that looks to have been used for Office 2000/03 is:

        Key: HKEY_CURRENT_USER\Software\Microsoft\Office\<version>\Outlook
        Value name: ForceOSTPath
        Value type: REG_EXPAND_SZ
        Value: path to your storage folder

    I am not sure if there is a better way to do this now, or if this even works with Office 2010.
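    If that registry approach still applies, a hedged sketch of setting it for Office 2010 from a command prompt (14.0 is the Office 2010 version key; the UNC path is a placeholder, and whether Outlook 2010 honours ForceOSTPath for this particular dialog is exactly the open question above):

        reg add "HKCU\Software\Microsoft\Office\14.0\Outlook" /v ForceOSTPath /t REG_EXPAND_SZ /d "\\fileserver\outlookdata" /f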

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid - but is there any downside in practice to just enabling jumbo frames? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum frame size is negotiated during connection setup. UDP is a theoretical problem, as a server may just send a large UDP packet that gets dropped on the way. Practically though, as UDP is packet based, I do not really think any software WOULD send a UDP packet larger than 1500 bytes net without app-level configuration changes - at least this is how I do my programming, as it is quite hard to pick a decent MTU size for that without testing yourself, so you fall back in programming to packets of at most 1500 bytes. The network in question is a standard small business network - we have just upgraded from an unmanaged 24-port switch to a 52-port switch with 4 10G ports (Netgear - quite cheap) and will move a file server to 10G, which will also serve iSCSI. All my equipment at the Ethernet level can handle a minimum of 9000 bytes, and due to local firewalls I really want to use larger packets (less firewall processing), but the network is also NAT'ed to the internet. On top of that, different machines move around (download) large files (multi-gigabyte range) quite often for processing. The question is: can I expect problems if I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending UDP packets of more than 1500 bytes (if that is a practical problem, please tell me), and for TCP the MTU is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations would then have to be on both VLANs.
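    For what it's worth, a hedged sketch of checking and enabling jumbo frames on a Server 2012 NIC from PowerShell (the advanced-property display name "Jumbo Packet" and the value string vary by driver, so list the properties first; the adapter name is a placeholder):

        Get-NetAdapterAdvancedProperty -Name "Ethernet" | Where-Object { $_.DisplayName -like "*Jumbo*" }
        Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"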

    Read the article

  • Piecing together low-powered hardware for an RS-232 terminal server

    - by Fred
    I'm working on reconstructing my Cisco lab for training/educational purposes, and I found that the actual terminal server I have is dead. I have a couple of 8-port PCI serial cards, which would be more than ample for my lab, but I don't want to leave my personal computer running just to be able to access the console ports. Ideally I would access the terminal server remotely, either by SSH/RDP to the box (depending on what OS I go with) or by installing a software package that allows me to telnet directly to a serial port. I know I've found a program that does this under Linux in the past, but its name escapes me at the moment. I'm thinking about scavenging for some old hardware, on eBay or something, to put together a low-powered PC. It needs to be something that:
        - has low power consumption
        - has at least 2 PCI slots (though I certainly wouldn't complain about having more)
        - has onboard Ethernet (or, if not, another PCI or ISA slot (not shared))
        - can be headless once an OS is installed (probably Linux)
    I'm currently leaning towards an old-fashioned Pentium (sub-133 MHz era), but I am wondering if anybody else knows of another platform/mobo that would suit these needs. Alternatively, I've been considering buying a Raspberry Pi and a big USB hub along with a bunch of USB-serial adapters, but this sounds like it'd get messy quickly with cables and adapters all over the place, and I may not even have the same ttyS#'s between boots.

    Read the article

  • How should I set up protection for the database against sql injection when all the php scripts are flawed?

    - by Tchalvak
    I've inherited a PHP web app that is very insecure, with a history of SQL injection. I can't fix the scripts immediately; I need them running to keep the website running, and there are too many PHP scripts to deal with from the PHP end first. I do, however, have full control over the server and the software on it, including full control over the MySQL database and its users. Let's estimate it at something like 300 scripts overall, 40 semi-private scripts, and 20 private/secure scripts. So my question is how best to go about securing the data, with the implicit assumption that SQL injection from the PHP side (e.g. somewhere in that list of 300 scripts) is inevitable. My first-draft plan is to create multiple tiers of differently permissioned users in the MySQL database. In this way I can first secure the data and scripts most in need of securing (the "private/secure" category), then the second tier of database tables and scripts ("semi-private"), and finally deal with the security of the rest of the PHP app overall (with the result of finally securing the database tables that essentially deal with "public" information, e.g. stuff that even just viewing the homepage requires). So: 3 database users (public, semi-private, and secure), with a different user connecting for each of the three groups of scripts (the secure scripts, the semi-private scripts, and the public scripts). In this way, I can prevent all access to "secure" from "public" or from "semi-private", and to "semi-private" from "public". Are there other alternatives that I should look into? If a tiered access system is the way to go, what approaches are best?
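    As a hedged sketch of what the tiered accounts could look like in MySQL (database, table and user names are placeholders; the real grants would follow whatever tables each group of scripts actually touches):

        -- public scripts: read-mostly access to public tables only
        CREATE USER 'app_public'@'localhost' IDENTIFIED BY 'changeme1';
        GRANT SELECT ON webapp.pages TO 'app_public'@'localhost';

        -- semi-private scripts: wider access, but still no secure tables
        CREATE USER 'app_semiprivate'@'localhost' IDENTIFIED BY 'changeme2';
        GRANT SELECT, INSERT, UPDATE ON webapp.members TO 'app_semiprivate'@'localhost';

        -- secure scripts are the only ones that can reach the sensitive tables
        CREATE USER 'app_secure'@'localhost' IDENTIFIED BY 'changeme3';
        GRANT SELECT, INSERT, UPDATE, DELETE ON webapp.payments TO 'app_secure'@'localhost';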

    Read the article

  • XP Pro freezes on welcome screen

    - by Peter B
    I have a problem that sounds much the same as http://superuser.com/questions/83101/how-to-diagnose-a-freeze-on-startup-in-windows-xp. About every 2nd or 3rd boot, XP Pro freezes on the Welcome page (the one showing the user name icons). The mouse moves the cursor OK, but clicking on an icon does nothing, and neither does any keystroke. If you press too many keys there is a beep, and after that the mouse won't move the cursor any more. The work-around is always to reboot into Safe Mode and request a CHKDSK /R. After this, the next boot is fine. There are no related entries in the Event Log when the problem occurs. The only two entries are: "Microsoft (R) Windows (R) 5.01. 2600 Service Pack 3 Multiprocessor Free." and "The Event log service was started." Update 1: Many thanks for that, but I found no problems with any diagnostics I have run. However, I have managed to locate, and code around, the source of the problem - which is whatever processing goes on behind the XP Welcome (as opposed to "classic") login screen. Unfortunately, having classic login means you don't get the useful Fast User Switching (FUS) login/switch mode. So, to retain this, my fix is:
        1. Add a Windows shutdown script (using gpedit.msc) to force "classic" mode for the first logon after the next XP startup, by running:
            reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v LogonType /t REG_DWORD /d 0 /f
        2. Add a Scheduled Task that runs at user logon and re-enables the Welcome screen (and hence FUS) by running the same command with 1 instead of 0 after the /d flag. The task runs as a privileged user (who can run "reg add").

    Read the article

  • Compress a folder of PDF files into separate zip archives

    - by Panrubius
    I wanted to take a folder full of PDF files and create a number of separate zip files. After following the advice on this question, everything worked *almost* perfectly. Here's what happened: when I issued this command in Terminal:

        zip -s 5m -r ~/Desktop/invoices ~/Desktop/Invoices/

    everything worked really well, in that I got 11 zip files of approximately 5 MB each, placed in the folder specified. However, the files it outputted were named as follows:

        invoices.z01
        invoices.z02
        invoices.z03
        invoices.z04
        invoices.z05
        invoices.z06
        invoices.z07
        invoices.z08
        invoices.z09
        invoices.z10
        invoices.zip

    So as you can see, only invoices.zip has been named correctly. I could go through and rename them one by one, but seriously, if we start doing that then what in the name of Evolution are computers for?! Now, I am also aware that I'm relatively new to the Terminal, so I could be making a very silly mistake somewhere. If that's the case, please be patient :-) Any help would be greatly appreciated. One last note: I'm quadriplegic, so I would like to avoid GUI applications as much as possible; I use voice recognition software, you see, so working in the Terminal is much, much easier.

    Read the article

  • How to Eliminate Black Bars from Powerpoint videos ?

    - by appu
    Hi, I am running a digital signage system for my client. The basic installation is a vertically oriented 42" LCD TV with a 1920x1080 screen resolution (the reverse of 1920x1080 when set up normally, i.e. landscape). Please check out the following link for the basic screen-division layout I want to set up: http://flickr.com/photos/55097319@N03/5410208856. In the division labeled "ppt" I plan to run a PowerPoint presentation. That screen division is 360x1476 pixels. As there isn't an option in PowerPoint to specify slide size in terms of resolution, I followed this article on Indezine, http://indezine.com/products/powerpoint/books/perfectmedicalpres02.html, and divided 360 and 1476 each by 72, which gives me 5" x 20.5" as the slide size for my ppt. After setting up the slide size to those dimensions, I used Sizer (http://brianapps.net) to resize my PowerPoint window to 360x1476 so that when I record I do not get any black bars. But after starting the recording there are side black bars visible, which Camtasia records and brings into Camtasia Studio with black bars: http://www.flickr.com/photos/55097319@N03/5409597049. My question is: after doing the above, and as explained in the following video link, why do I still get black bars? http://feedback.techsmith.com/techsmith/topics/eliminate_black_bars_in_your_powerpoint_recordings. Is there an option in Camtasia to stretch the recording to cover up the black bars, or is there any other way I can get rid of them while recording a ppt at my preferred dimensions? Notes: the TechSmith video above asks me to adjust my desktop screen resolution, which my display chipset does not allow me to set. I set up the show in PowerPoint to be "browsed by an individual (window)". Also, the signage software only supports SWFs and video file formats natively, not ppt, pptx, etc. Thanks, bhavani.

    Read the article

  • Resize Win2003 system+boot partitions to bigger disks & different controller?

    - by ane
    I have an old Win2003 server with 1 SCSI hard drive partitioned as follows:
        D: boot (includes D:\ntldr, boot.ini, etc.)
        C: system (includes C:\WINDOWS)
    I want to move the whole system to new hardware with bigger drives and different controllers. Specifically, C: to a 300 GB SAS drive, and D: to a 2 TB SATA drive. Tried so far:
        - VMware Converter, then VMware Server, then Diskpart. Result: Diskpart refuses to resize system or boot disks.
        - VMware Converter, then VMware Server, then GParted. Result: will not boot (see http://serverfault.com/questions/219868/resize-ntfs-system-partitions-with-gparted ).
        - Attaching the original VMware disk to a duplicate VMware install, then Diskpart. Result: will not boot (goes to Directory Services Restore mode).
        - Backup Exec System Recovery Server Edition 2010 with Restore Anywhere (tried restoring both to VMware and to the bare system, without VMware). Result: Windows boot error: "Could not read from the selected boot disk. Check boot path." Supposedly this is a boot.ini problem, so I tried bootcfg /rebuild from the Recovery Console. It says it can't find a Windows partition, so it can't rebuild.
    I thought about Ghost, but it's completely different hardware/controllers that we're restoring to, so I doubt it would boot. Reinstalling Windows from scratch is not an option due to critical custom software heavily embedded on the original machine. Has anyone been in a similar situation (with unusual boot/system partitions) before and figured out how to resize onto different disks?

    Read the article

  • linux keeps disconnecting from wireless network

    - by Matteo Ceccarello
    I'm running Arch Linux on an Acer laptop and my wireless connection doesn't stay up. After a while it disconnects, and when I try to reconnect I get stuck with a "Waiting for authorization" message. I have to retry several times before getting the connection to stay up for a few minutes. This happens with both NetworkManager and wicd. The strange thing is that the iMac that sits next to the laptop connects fine, and when I use my laptop within the university wireless network it works normally. How can I solve this problem? EDIT: I've tried to connect manually, following these steps:

        iwlist wlan0 scan
        wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
        dhcpcd wlan0

    and it works; I can ping google. However, looking at the wpa_supplicant output I see that it keeps connecting and disconnecting. I'm using WPA2, and this seems to be a problem in authentication. EDIT 2: as pointed out in the answers, I forgot to mention my hardware/software specifications:

        kernel: Linux 3.0-ARCH
        wireless card:
            # lspci | grep -i net
            07:00.0 Network controller: Intel Corporation WiFi Link 5100
        module used:
            # lsmod | grep -i 80211
            mac80211 216021 1 iwlagn

    I use a Netgear DGN1000 modem/router. My dmesg output is shown here: http://pastebin.com/8Tf7iage

    Read the article

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and the number will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand until now and don't want to anymore (static IP addresses, disabling unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?
        1. Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if this is actually the best solution.
        2. Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications, I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot? (See the sketch after this question.)
    I'm leaning towards the PXE solution, but I think monitoring with munin or nagios will be a little more complicated with it. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
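    On the PXE side, here is a hedged sketch of one way to regenerate the per-host HWADDR lines at boot (an illustrative early-boot script, not an established cluster convention; paths follow the RHEL/CentOS ifcfg layout mentioned above, and InfiniBand interfaces would need their own handling):

        #!/bin/bash
        # rewrite each ifcfg file's HWADDR line from the running interface's real MAC
        for cfg in /etc/sysconfig/network-scripts/ifcfg-eth*; do
            iface=${cfg##*ifcfg-}
            mac=$(cat /sys/class/net/"$iface"/address 2>/dev/null) || continue
            sed -i "s/^HWADDR=.*/HWADDR=${mac}/I" "$cfg"
        done

    Run before the network service starts (or from the image's first-boot hook), this keeps a single golden image usable across machines that differ only in their MAC addresses.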

    Read the article
