Search Results

Search found 34337 results on 1374 pages for 'build machine'.


  • uTorrent, ISA Server 2006 and packets dropped due to TCP_NOT_SYN

    - by Pascal
    Hi, I'm trying to get uTorrent 2.0.4 to work on a DMZ machine protected by an ISA Server 2006. I've opened one inbound port (via publishing) and opened all the higher ports for the specific machine that runs uTorrent on my DMZ, and it's working almost fine. The problem is that I keep getting packets dropped with 0xc0040017 FWX_E_TCP_NOT_SYN_PACKET_DROPPED. Is there any way to disable this via the registry? Is there any way around it? The download speed fluctuates a lot, and when it starts hitting the upper limit that I've defined in uTorrent, the errors start popping up frequently, the download speed goes way down, and the process repeats on and on. Thanks.
    Edit: My outbound rules are: Port Range: TCP 10000-65535, Outbound; Port Range: UDP 10000-65535, Send.
    Edit: It's probably a bug handling requests from Windows 7. When I installed uTorrent on an XP machine, the problem went away.

    Read the article

  • Assign a PowerShell script to run at startup using PowerShell on Windows Server 2012

    - by James Toyer
    I'm trying to write a PowerShell script that will run when a Windows 2012 instance is created on AWS, using the configuration tools provided by AWS. My problem is that I want to change the name of the machine once it has started up, restart the machine, and carry on with the setup process afterwards. The main reason for this is that one of the applications installed in the setup process, Boundary, takes the name of the server when first installed, and it then doesn't seem possible to change its name in their portal. Ideally I would have two PowerShell scripts: one to start the setup process, initialised through AWS, and another that runs the first time the machine restarts. This second script would ideally be queued to run on the next start by the initial setup script. So I guess my questions are: Is this possible? How would I go about doing it? My Google foo is letting me down here, so any answers would be appreciated.
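
    A minimal sketch of the "queue a second script for the next boot" idea, using a RunOnce registry value. Python stands in for PowerShell here purely for illustration; the value name and script path are assumptions, and note that HKLM RunOnce entries fire at the next interactive logon rather than at boot, so a fully unattended setup may prefer a scheduled task instead.

        # Hypothetical sketch: register a stage-two script to run once after
        # the machine restarts, via the RunOnce registry key.
        # Caveat: RunOnce fires at the next interactive logon, not at boot.
        import winreg  # Windows-only standard library module

        RUNONCE = r"Software\Microsoft\Windows\CurrentVersion\RunOnce"

        def queue_for_next_start(script_path):
            # The command line and value name below are illustrative assumptions.
            command = 'powershell.exe -ExecutionPolicy Bypass -File "{}"'.format(script_path)
            with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, RUNONCE) as key:
                winreg.SetValueEx(key, "ContinueSetup", 0, winreg.REG_SZ, command)

        queue_for_next_start(r"C:\setup\stage2.ps1")  # hypothetical stage-two script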

    Read the article

  • Streaming media from linux server - low footprint is crucial

    - by Mike Haye
    I recently pre-ordered the Raspberry Pi. http://www.raspberrypi.org/faqs For those of you who don't know it, it's a machine with 256 MB of RAM and a 700 MHz processor for $35. I plan to run Linux off an SD card on this machine and have it act as an HTPC, VPN and media server all at once. In regard to the media server part, I need to find some Linux software that has a small footprint but allows me to stream media to other devices connected to the internet (preferably without having to install any additional software on the client machines). Also, I would love it if the video could be compressed, so the data usage wouldn't be so big for the client machine (e.g. when I'm using the data plan on my smartphone ;) ) Thanks in advance for any answers :) Mike.

    Read the article

  • Synergy client drops and reconnects at UAC dialog

    - by sidran32
    I've been using Synergy for a while at work to connect my XP machine (the host) to my Win 7 laptop (the client). I previously was having issues with using Synergy and the UAC prompt, as described in this question, and have had no issues since, until recently. I upgraded to Synergy 1.4.10 and now am seeing odd behavior whenever a UAC prompt appears on my laptop. When the UAC prompt appears on my laptop, Synergy momentarily drops its connection to my laptop, causing my mouse and keyboard focus to revert to my host machine (the XP machine). After about a second or so, though, the connection gets re-established and I am able to type and use the mouse buttons in the UAC prompt. Once the prompt clears, the connection drops again for a second, and then gets re-established again. Is this something to do with configuration or perhaps should I just chalk it up to a change in behavior in version 1.4.10?

    Read the article

  • MaaS minimum requirements with juju-jitsu?

    - by Christopher Shen Mu Long
    I've browsed through so many different sites and found so much contradictory information. I am getting tired of this and believe this question affects many other users, so I would like to collect the "once and for all times" answer. Unfortunately, the documentation on MaaS and Juju is... well, not the best, sorry to say that. What are the minimum system requirements for setting up a MaaS cluster that is going to be orchestrated with juju-jitsu? Specifically:
    1. Do the machines need the exact same specifications, or can I combine different hardware?
    2. What are the minimum requirements for the master machine? E.g. "You need at least 8 GB of RAM and a dual-core CPU with at least 3.0 GHz."
    3. How many machines do I need to deploy MaaS on? I've read six machines, nine machines, and so on. I clearly want to know: "You need one for the master and e.g. five nodes."
    4. Do I need to attach as many NICs (network interface cards) to my master machine as there are nodes, or can I simply attach two NICs and a switch - one NIC for connecting to the internet, and one for handling the MaaS tasks, connected to a switch which connects my nodes to the master?
    5. Is Juju now ready for local deployment? The last time I experimented with Juju and had to reboot my machine, the services orchestrated by Juju were gone. This was an issue I also found on the official Juju site; unfortunately, as mentioned above, the documentation is not the best, so I could not find the necessary info on that again. So: can I use Juju in a local environment, or will a reboot break my setup?

    Read the article

  • Packet loss with all adapters on one PC only on the LAN

    - by Enigmativity
    I have a Windows 7 64-bit machine that is losing up to 20% of IP packets on both adapters - wireless and LAN. Browser traffic appears to be affected the most, but it is happening to all protocols. All other computers on the network are functioning fine. If I ping from the faulty machine to any machine on the LAN (wired or wirelessly), including the router/gateway and internet sites, I get up to 20% packet loss. If I run the following commands:
        ipconfig /release
        ipconfig /renew
    then I sometimes get my network performance back for anywhere from a few seconds to a couple of minutes. Rebooting also works for a short period of time. This problem has been occurring for a couple of months and is getting worse; the computer used to work just fine. I updated the wireless adapter firmware the other day with no effect. Does anyone know what is happening?
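
    To put a number on the loss while testing fixes, here is a rough, hedged sketch (Python, parsing Windows ping output; the target addresses are placeholders) that pings each target in a burst and reports the loss percentage:

        # Rough packet-loss probe: ping a host N times and count replies.
        # Counting lines with "bytes=" is approximate but distinguishes real
        # replies from "Destination host unreachable" notices on Windows.
        import subprocess

        def loss_percent(host, count=50):
            result = subprocess.run(
                ["ping", "-n", str(count), host],  # "-n" is the Windows count flag
                capture_output=True, text=True,
            )
            received = result.stdout.count("bytes=")  # successful replies only
            return 100.0 * (count - received) / count

        for target in ("192.168.1.1", "8.8.8.8"):  # e.g. gateway, then an internet host
            print(target, "loss:", loss_percent(target), "%")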

    Read the article

  • Low performance on an HPC cluster (SGE) when running multiple jobs

    - by Yotam
    I know this is a long shot, but I'm clueless here. I'm running several computer simulations on a High Performance Computing (HPC) cluster managed by Oracle Grid Engine (SGE). A single job runs at a certain speed (roughly 80 steps per second); when I add jobs to a machine, at a certain threshold the speed is cut in half. On one machine (I don't know the CPU model) the threshold is 11 jobs for 16 CPUs. On another one with the same number and kind of CPUs, the threshold is 8. I thought at first that this was a memory issue, but each job takes about 60-100 MB and I have 16 GB of RAM on each of those machines. Has anyone encountered such a problem? Is there any way to analyze this? Thanks.
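
    One thing worth ruling out, purely as a guess: if the 16 "CPUs" are really 8 physical cores with hyperthreading, a throughput knee around 8 jobs would be expected. A minimal Linux sketch to compare logical CPUs against physical cores:

        # Count logical CPUs vs. distinct physical cores from /proc/cpuinfo.
        # If logical is roughly double physical, hyperthreading is enabled.
        def count_cores():
            logical = 0
            physical = set()
            phys_id = None
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("processor"):
                        logical += 1
                    elif line.startswith("physical id"):
                        phys_id = line.split(":")[1].strip()
                    elif line.startswith("core id"):
                        core_id = line.split(":")[1].strip()
                        physical.add((phys_id, core_id))
            return logical, len(physical)

        logical, physical = count_cores()
        print("logical CPUs:", logical, "physical cores:", physical)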

    Read the article

  • Should I disable write caching on my Windows 2008 VM?

    - by javano
    I have a Windows Server 2008 x64 Standard virtual machine that runs on a host with a hardware RAID controller, a Perc 6/i, which has an on-board battery. Doing everything I can for additional performance, I think I should disable this. Is this very dangerous, though? My understanding is that battery-backed write caching gives a performance boost to the host OS by reporting writes as complete while they are still sitting in the controller's cache waiting to be written. However, I can't see how it would be detrimental to performance, but is there a gain (even if marginal) to enabling or disabling it? P.S. The machine has backup power. Here is a screen shot for clarification:

    Read the article

  • FreeBSD Ports: How can I see all dependencies for a port, and all subdependencies for those dependencies?

    - by Stefan Lasiewski
    I'm trying to build a port which depends on apache-ant. I thought I could run make build-depends-list to see all dependencies required by this port:
        # make build-depends-list
        /usr/ports/devel/apache-ant
        /usr/ports/java/jdk16
        /usr/ports/math/gmp
    But after installing everything, the port had a dependency list which was a mile long:
        apache-ant-1.8.1 desktop-file-utils-0.15_2 gamin-0.1.10_4 gettext-0.18.1.1 gio-fam-backend-2.26.1 glib-2.26.1_1 gmp-5.0.1 inputproto-2.0 javavmwrapper-2.3.5 kbproto-1.0.4 libX11-1.3.3_1,1 libXau-1.0.5 libXdmcp-1.0.3 libXext-1.1.1,1 libXi-1.3,1 libXtst-1.1.0 libiconv-1.13.1_1 libpthread-stubs-0.3_3 libxcb-1.7 pcre-8.12 perl-5.10.1_3 pkg-config-0.25_1 python26-2.6.6 recordproto-1.14 unzip-6.0 xextproto-7.1.1 xproto
    How can I see all dependencies, and all subdependencies for a port?
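
    For what it's worth, the ports framework also has an all-depends-list target that may already print the transitive closure; failing that, here is a hedged sketch of computing it by re-running the per-port lists recursively (Python, assuming a standard ports tree layout):

        # Transitive dependency walk: re-run `make build-depends-list` and
        # `make run-depends-list` in each dependency's port directory until
        # the set of known dependencies stops growing.
        import subprocess

        def depends(portdir, target):
            out = subprocess.run(
                ["make", "-C", portdir, target],
                capture_output=True, text=True,
            ).stdout
            return {line.strip() for line in out.splitlines() if line.strip()}

        def all_depends(portdir):
            seen, frontier = set(), {portdir}
            while frontier:
                nxt = set()
                for port in frontier:
                    deps = depends(port, "build-depends-list") | depends(port, "run-depends-list")
                    for dep in deps:
                        if dep not in seen:
                            seen.add(dep)
                            nxt.add(dep)
                frontier = nxt
            return sorted(seen)

        for dep in all_depends("/usr/ports/devel/apache-ant"):
            print(dep)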

    Read the article

  • Apache server not working after installing Zend Server

    - by kamal
    I have Apache installed on my Red Hat 5.3 server machine, and I was trying to install Zend Server. I installed Zend Server with the install.sh file in the directory /var/zend. On my Windows machine, after installing Zend Server Community Edition, I was able to access the Apache server as well as Zend Server. But on my Linux machine, localhost displays nothing and localhost:10081 shows Zend Server. What can I do to get my localhost running? Or should I install Apache separately?

    Read the article

  • MAAS/Juju Clarifications

    - by ectoskeleton
    I really love the concept of MAAS underlying an OpenStack implementation, but there are a few questions about MAAS that I am not entirely clear on.
    1. Should all hosts be set to network boot at all times, or should they boot to disk after they have been registered and allocated as a service?
    2. After juju bootstrap is executed, I turn on the machine that has been allocated (note WoL isn't working - I suspect it's being blocked on the network); the machine boots up and then juju status executes correctly, agent running and all that good stuff. If I reboot the machine (testing a power failure or whatever), juju status comes back but the agent-state is no longer "running", and so far I have to destroy the environment and restart.
    3. In all cases I have never been able to deploy any services to any of the other nodes. I deploy the service with juju, note which node it was assigned to, and then start the system. The system just boots up into a basic node. If I SSH to it I have to enter a password, so it's not setting up the SSH key or anything.
    This is on Ubuntu 12.04.1 LTS systems and HP GL360G7 hosts. The MAAS management server is running as a VM, but all on the same network. At this point I am not sure if I am doing something wrong or if there is a problem somewhere else. Is the idea that any time a host is rebooted it should be rebuilt from the ground up, or is something else going on behind the scenes to tell it to boot the local image? If the latter, why doesn't the agent start on a system that has been successfully set up before (a juju bootstrapped system)?

    Read the article

  • Enable hardware virtualization on an HP Compaq dx2420?

    - by 7alwagy
    Hey guys, after installing VMware Workstation 7, I tried to run a virtual machine with Mac OS X installed. When I tried to run this virtual machine I got an error message saying: "Mac OS X is not supported with software virtualization. To run Mac OS X you need a host on which VMware Workstation supports hardware virtualization." I've googled and found out that my processor (Intel Core 2 Duo E7500, 2.93 GHz, 3 MB L2 cache, 1066 MHz FSB) supports hardware virtualization. Does anyone know how to enable this in order to get the virtual machine running?

    Read the article

  • Blogging from Office RT

    - by Dennis Vroegop
    During the last Build conference all attendees were given a brand new, sparkling, exciting Surface RT device (I love that machine despite its name, but that's beside the point). On it came a version of Office 2013 RT, or better: the preview version. Now, I translated that term "Preview" to "Beta", which is OK, since I've been using a lot of beta products from Microsoft and they all were great. And then I wanted to post a blog entry from Word. I knew I could; I have been doing this for a long time (I prefer Live Writer, but that isn't available on Windows RT). So I wrote the entry and hit "Publish". Instead of my blog site I got a nice non-descriptive error telling me I couldn't post. So I fired up my other (Intel-based) Win8 tablet, opened the Word RT preview, it loaded my blog post (you've got to love the automatic synchronization through SkyDrive), and I tried from that machine. Same error. So I installed Live Writer (remember, the other machine is Intel-based) and posted from there. That worked like a charm. Apparently, there was something wrong with Word. I gave up and didn't think about it anymore. Yet... what you're reading now is written in Word 2013 RT on my Surface RT. So what did I do? Simple: I updated from the preview version to the final version. That's all there was to it. So... if you're still on the preview, I urge you to upgrade. You need to go through the "classic desktop update" window instead of the Windows Store app-style update, since Office is a desktop system, but once you do that you'll have the full version as well. Happy blogging!

    Read the article

  • Gitosis on Mac OS X (Snow Leopard)

    - by Shyam
    Hi, I have a Snow Leopard box where I have gitosis installed (warning: noob alert). I added a git user, and I am able to log in to the machine remotely with ssh. Locally, I can clone my created repositories, and I can clone the gitosis-admin repository too. Works perfectly. I clone these using the 'git' user:
        git clone git@my-remote-machine:reponame.git
    What doesn't work, when remotely logged in:
        git clone git@localhost:reponame.git
    On that same remote machine where the repositories live, I can't clone from localhost. It asks for a password, which wasn't created as far as I know. What am I doing wrong? Thank you for your replies!

    Read the article

  • [Kubuntu 14.04] Eclipse (ADT) crashes at the OK button in Project properties

    - by nouseforname
    Since I upgraded to Kubuntu 14.04, my Eclipse crashes in different situations. Mostly I can "simulate" it by going to the project properties and pressing OK - then it always crashes.
    My system:
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=14.04
        DISTRIB_CODENAME=trusty
        DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"
    My Java:
        java version "1.8.0_05"
        Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
        Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
    My ADT version:
        Android Development Toolkit Version: 23.0.0.1245622
    I already tried to add this in adt-bundle-linux-x86_64/eclipse/configuration/configuration.ini:
        org.eclipse.swt.browser.DefaultType=mozilla
        -Dorg.eclipse.swt.browser.DefaultType=mozilla
    Error:
        # A fatal error has been detected by the Java Runtime Environment:
        #
        # SIGSEGV (0xb) at pc=0x00007fe049eb1718, pid=5964, tid=140601811232512
        #
        # JRE version: Java(TM) SE Runtime Environment (8.0_05-b13) (build 1.8.0_05-b13)
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.5-b02 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C [libgobject-2.0.so.0+0x19718] g_object_get_qdata+0x18
        #
        # Core dump written. Default location: /home/maddin/core or core.5964
        #
        # An error report file with more information is saved as:
        # /home/maddin/hs_err_pid5964.log
        Compiled method (nm) 28866 4166 n 0 org.eclipse.swt.internal.gtk.OS::_g_object_get_qdata (native)
        total in heap [0x00007fe051da6790,0x00007fe051da6af0] = 864
        relocation [0x00007fe051da68b0,0x00007fe051da68f8] = 72
        main code [0x00007fe051da6900,0x00007fe051da6ae8] = 488
        oops [0x00007fe051da6ae8,0x00007fe051da6af0] = 8
        #
        # If you would like to submit a bug report, please visit:
        # http://bugreport.sun.com/bugreport/crash.jsp
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.
    Now, as soon as I change System Settings - Application Appearance - GTK - GTK theme to something other than "oxygen-gtk", this crash doesn't happen anymore - but then the application appearance is ugly. Besides that, I get a lot of errors/warnings like:
        (SWT:6148): GLib-GObject-CRITICAL **: g_closure_add_invalidate_notifier: assertion 'closure->n_inotifiers < CLOSURE_MAX_N_INOTIFIERS' failed
    or other GTK warnings from the particular theme not having a theme engine, which don't seem to cause any crash so far. So I have three options:
    1. accept crashes
    2. accept warnings (maybe the best choice)
    3. accept ugly design
    What can I do to solve this issue without changing the design settings?

    Read the article

  • Is the output of Eclipse's incremental java compiler used in production? Or is it simply to support Eclipse's features?

    - by Doug T.
    I'm new to Java and Eclipse. One of my most recent discoveries was that Eclipse ships with its own Java compiler (ecj) for doing incremental builds. Eclipse seems by default to output the incrementally built class files to the projRoot/bin folder. I've noticed too that many projects come with Ant files that build the project using the Java compiler installed on the system for the production builds. Coming from a Windows/Visual Studio world, where Visual Studio invokes the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler - I'm used to the project being the makefile. So my mental model is a little off. Is what's produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (i.e. its IntelliSense-style completion, incremental building, etc.)? Is it typical that for the final "release" build of a project, Ant, Maven, or another tool is used to do the full build from the command line? Mostly I'm looking for the general convention in the Eclipse/Java community. I realize that there may be some outliers out there who DO use ecj in production, but is this generally frowned upon? Or is it normal, accepted practice?

    Read the article

  • How to properly learn ASP.NET MVC

    - by Qmal
    Hello everyone, I have a question to ask, and maybe some of you will think it's lame, but I hope someone will get me on the right track. I've been programming for quite some time now. I started when I was about 13 or so in Delphi, but at about 17 I switched to C#, and now I really like to program in it, mostly because its syntax is very appealing to me, plus managed code is very good. So it all was good and fun, but then I had some job offers that I of course took, and the problem with them is that they are all about web programming. So I had to learn PHP and MVC fundamentals, and I somewhat did while building applications using the CI and Kohana frameworks. But I want to build websites using ASP.NET, because I like C# much, much more than PHP.
    TL;DR: I want to learn ASP.NET MVC but I don't know where to start. What I want to start with is building something simple, like a CMS. Do I use the same logic as in PHP? What do I use for DB connections? And also, if I plan to host something built with ASP.NET MVC 3 on a hosting provider, do I need to buy some kind of license?

    Read the article

  • CUPS keeps overwriting printers.conf with AuthInfoRequired

    - by mooscape
    My problem is basically identical to the following: http://bbs.archlinux.org/viewtopic.php?id=61826 Put simply, I have a machine running Ubuntu trying to connect to another Ubuntu machine over the network in order to use the printer attached to it. There is no problem printing until I restart the guest machine. Immediately it overwrites the printers.conf file (under /etc/cups/printers.conf). It always adds the same line:
        AuthInfoRequired username,password
    I stop cups and comment the directive out:
        #AuthInfoRequired username,password
    Start cups. Works great 'til the next shutdown. Then it gets overwritten again. Googling indicates it may be a GTK problem and not CUPS, but I have found no permanent solution to date. Any suggestions appreciated.

    Read the article

  • How to configure 2nd network card for use in VMWare Workstation?

    - by Timo
    Hi all, I am using VMware Workstation 6.5, connected to my network with a bridged adapter so that the virtual machine OS (Windows XP) has its own IP address. This just worked out of the box. Now my host machine (Windows Vista) has an additional network card that is directly connected to another computer using a crossover cable (with the fixed IP address 10.1.1.4, while the "main" network connection uses DHCP with an IP in the 192.168.0.* range). How can I use that network connection in the virtual machine as well? Do I need to bridge my 2nd network adapter to some VMnetX adapter? Do I need to add a host virtual adapter? I do not know much about networks, and the VMware network settings really confuse me :-) Thanks, Timo

    Read the article

  • Boot error on Ubuntu 12.04 when booting from GRUB

    - by Paul Z.
    My name is Paul. I have encountered an issue relating to GRUB and the boot process in general. I have been running Ubuntu 12.04 LTS on my machine for quite a while; before that I had 10.04, 11.04, 11.10, etc. I have been running Ubuntu in general, but more specifically 12.04, for a long time with little to no problems. The problem: earlier today I was using my machine and then decided to take a little break, so I shut down my machine (a laptop, in case anyone was wondering) and left. Later, I came back ready to start it up and continue. I started it up and it took me to the Toshiba screen (as normal), then to the GRUB screen. I guessed that nothing was truly wrong, and chose the first option (something along the lines of: Ubuntu, with Linux 3.22.0-35-generic). I waited for a bit and it still displayed the same purple screen. I restarted it, and this time chose the option like the first but with "recovery" at the end. Same result. Later, I waited longer and found that my computer came up with a bunch of lines of text. I waited longer, but nothing new happened. What are your suggestions for fixing this problem? I will let my computer run overnight with the recovery setting and will let you know what the result is. Until then, please help. Thank you; your time and effort is greatly appreciated!

    Read the article

  • Windows service running as network service - how does it authenticate? Breaking change in W2K8?

    - by Max
    A Windows service running as "Network Service" talks to services on other machines (here: SQL Server and Analysis Services), using Windows authentication. For authentication, we have to grant permissions to the machine account of the service. E.g. if service runs on server MYSERVER in domain MYDOMAIN, it'll authenticate itself as "MYDOMAIN\MYSERVER$". - Am I correct, so far? Now here's my question: does this still apply when talking to a service on the SAME machine? Or will it authenticate with something like "NT AUTHORITY\Network Service" instead when connecting to a local service? And: is there any chance this is a breaking change from Windows 2003 to Windows 2008? We're having an actual issue in our system where the account was able to connect to local services with only the machine account having permissions in W2K3. In W2K8, this doesn't seem to work anymore: authentication to local services now fails, but still works to remote machines.

    Read the article

  • Importing Outlook 2007 rules error

    - by Alex
    I'm trying to move an Outlook 2007 account (POP3, no Exchange) to a new machine, and I'm having trouble importing the rules from the old machine to the new one. Here is the deal: I imported the .pst file on the new machine, but when I try to import the rules, every single one of them breaks. The folder and sub-folder hierarchy is preserved by the import of the .pst, but the rules don't point to the right folders; instead each rule points to "the specified folder". Same OS (Windows XP), same mail client (Outlook 2007), and the .pst file is about 8 GB. Any help is greatly appreciated.

    Read the article

  • Why does Windows 7 need hardware virtualization to run XP mode?

    - by Ken Pespisa
    I have a MacBook Pro and I've run VMware Fusion's Unity mode and Parallels' Coherence mode alongside Mac OS X, and both work pretty seamlessly. I figured XP Mode in Windows 7 would be something similar, but I then learned my machine requires hardware virtualization support, which it does not have. My machine is an HP dc7800 - a dual-core 2.2 GHz machine with 4 GB of RAM. Certainly it has the horsepower to run a virtual environment alongside the primary OS. I'm wondering: 1) why did Microsoft decide to make hardware virtualization a requirement, and 2) what am I missing? Is the experience similar to Parallels' Coherence mode / Fusion's Unity mode? Thanks!

    Read the article

  • How can we control the device driver load order in Windows XP ?

    - by Deniz
    We are trying to install both an HP scanner and a Kodak scanner on an XP machine, and the problem is that the HP scanner driver always places itself in the first and default scanner device position. We are using automation software on that machine which must use the Kodak scanner as the default scanner. The HP flatbed doesn't need to be the default scanner on that machine; it will always be used manually and through its own software. Our developers did some research 2 years ago and couldn't find a solution. Nowadays we are preparing to expand our user base, and this problem has surfaced again on machines with the same configuration. I did a more aggressive search on the net and found some tips like this one: http://support.microsoft.com/kb/115486 I couldn't work out how to adapt this solution to my scenario. Could someone please point me in the right direction?

    Read the article

  • Mouse doesn't work & internet connection not made in Ubuntu 12.04 LTS

    - by David Skare
    Yesterday, Nov 15, 2012, I booted into my Ubuntu 12.04 LTS system. It has resided on a Crucial 128 GB SSD with about 90% free space since early summer. I also have Windows 7 loaded on another Crucial 256 GB SSD. Ubuntu has set up a dual-boot system for me, even though each OS has its own SSD. I have been using this setup without problems since summer. Yesterday, when the boot process finished, my Microsoft Comfort Mouse 3000 did not work and there was a message that Ubuntu was not connected to the internet. Without the mouse I was forced to turn the machine off manually. About 4 days ago Ubuntu worked fine, and booting into Win 7 still works fine. I have a backup machine with the same style of mouse on it, so I swapped that mouse onto this system. Same results. But both mice work when booting into Win 7. Today I removed both SSDs and installed my old Ubuntu 12.04 hard drive, which has not been used since I moved Ubuntu from it to the SSD. Same results. Between the last time I used Ubuntu 12.04 on the SSD and when I tried to use it again, I made no changes to my machine, either hardware or software. My machine's specs are: AMD FX-6100; MSI 990FXA-GD65 AM3+ board with the latest BIOS (ver. 19.9); Corsair Vengeance 1866 MHz memory, 16 GB (4 GB x 4 sticks); MSI N580GTX video card (nVidia 306.97 drivers); Sony Bravia 32" HD TV as a monitor; Pioneer Blu-ray DVD-RW; DSL connection to the internet through a router (10 Mbps); Crucial 128 GB SSD (90% free space); Microsoft Comfort Mouse 3000. I try to maintain current BIOS versions and drivers for all devices. I mostly use my Ubuntu system for programming in GCC and OpenCOBOL, surfing the internet and e-mailing. No games are installed. I'm stumped! If anyone has experienced this same problem I'd appreciate knowing how you solved it. TIA, Dave

    Read the article
