Search Results

Search found 17856 results on 715 pages for 'setup py'.


  • "Cannot import name genshi" error when installing the Swab library

    - by ATMathew
    I'm trying to install the Swab library for Python 2.6 in Ubuntu 10.10. However, I get the following error messages when I try to import it. In the terminal I ran:

        sudo easy_install swab
        sudo easy_install Genshi

    In the Python interpreter I ran:

        >>> import swab
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/swab-0.1.2-py2.6.egg/swab/__init__.py", line 23, in <module>
            from pestotools.genshi import genshi, render_docstring
        ImportError: cannot import name genshi

    I don't know what's going on. Can anyone help?
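    A hedged sketch for narrowing this down (assumption: the traceback means swab depends on a separate pestotools package that easy_install did not pull in; the package name is taken from the failing import line, not from any documentation):

        # Check whether the module swab actually imports is importable on its own.
        try:
            from pestotools.genshi import genshi  # the exact import swab/__init__.py performs
            print("pestotools.genshi is available; the problem lies elsewhere")
        except ImportError as e:
            # If this fails too, installing pestotools (hypothetical package name)
            # is the next thing to try: sudo easy_install pestotools
            print("missing dependency: %s" % e)

    If the inner import fails, the error is about the pestotools dependency rather than Genshi itself.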

    Read the article

  • Planning home network

    - by gakhov
    I'm planning to set up my home network from scratch and want to ask for professional opinions or tips. My home is connected to the Internet with a cable connection (100 Mb/s). The devices I would like to connect are a VoIP phone (RJ-45), a TV (WiFi/LAN), 3 laptops (WiFi), 2 smartphones (WiFi), an iPad (WiFi), a Kindle (WiFi), a network printer and, probably, a home media storage (WiFi/LAN). As you can see, most of the load will be on WiFi connections (probably, even if the TV supports WiFi, it's better to connect it by LAN?). So, I need help choosing the best router (or combination of routers) to support stable connections for all these devices and minimize the total number of routers/adapters. I like how Cisco/Linksys devices have worked for me in the past, so preferably (but not obligatorily) I want to set up the network with their solutions. Any thoughts?

    Read the article

  • Problem with XeTeX (LaTeX) and system fonts

    - by mghg
    I have started to use an enterprise-specific class for LaTeX, but have a problem using system fonts in Ubuntu. The class uses the fontspec package, so I have been instructed to use XeTeX (i.e. the command xelatex instead of latex or pdflatex). However, the command xelatex testfile.tex results in the following message:

        ! Package xkeyval Error: `TeX' undefined in families `Ligatures'.

        See the xkeyval package documentation for explanation.
        Type  H <return>  for immediate help.
        ...
        l.61 \newfontfamily\headfont{Arial}
        ?

    The class has previously been used on Mac and Windows and the font setup is as follows:

        \newfontfamily\headfont{Arial}
        \newcommand\texthead[1]{\headfont #1}
        \setromanfont{Georgia}
        \setmainfont{Georgia}
        \setsansfont[Scale=MatchLowercase]{Verdana}

    It has been suggested that since XeTeX makes use of system fonts and the class file has worked flawlessly on Mac and Windows, the problem might be that Arial is not a font name used in Ubuntu. I have tried to exchange Arial with Ubuntu Light in the setup code above, but that has not brought any improvement. Any suggestions on how to move forward?

    Read the article

  • USB-creator: Error erasing device: Unknown or unsupported erase type

    - by Mike Williamson
    I created a live USB using usb-creator-gtk. I installed Ubuntu with it and all was good with the world. Now I am trying to use the same memory stick to create a live USB for 14.04, and I get the following error when trying to erase the disk:

        org.freedesktop.DBus.Python.gi._glib.GError: Traceback (most recent call last):
          File "/usr/lib/python3/dist-packages/dbus/service.py", line 707, in _message_cb
            retval = candidate_method(self, *args, **keywords)
          File "/usr/share/usb-creator/usb-creator-helper", line 239, in Format
            block.call_format_sync('dos', GLib.Variant('a{sv}', {'erase': GLib.Variant('s', '')}), None)
        gi._glib.GError: GDBus.Error:org.freedesktop.UDisks2.Error.Failed: Error erasing device: Unknown or unsupported erase type `'

    How can I fix this so I can create a new live USB?
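    A hedged workaround sketch (assumption: the failure comes from UDisks2 choking on the stick's existing partition table, and you are willing to wipe the stick completely; /dev/sdX is a placeholder — triple-check the device before running this as root):

        # Zero the first MiB of the stick so usb-creator sees a blank device
        # and recreates the partition table itself. Destroys all data on it!
        import os

        dev = "/dev/sdX"  # placeholder: substitute your USB stick, never a system disk
        with open(dev, "wb") as f:
            f.write(b"\x00" * 1024 * 1024)
            f.flush()
            os.fsync(f.fileno())

    After that, re-running usb-creator-gtk should no longer need the failing erase step.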

    Read the article

  • Shrink Partition on Production Server

    - by Campo
    So our production server was set up with only one large partition. I have set up a standby server and properly partitioned it. Now the boss wants the production environment's partition shrunk. It is an HP DL380 G5, and we have 4 hot-swap drives in a RAID 5. How best should I go about doing this? It seems like a bad idea to me. Should I use Windows or HP tools to do the partitioning? What should I be aware of in a production environment? The idea is to put the site (Inetpub) on a separate partition instead of the C: drive. How much downtime should I expect? Is this a terrible idea? Anything else I have missed?

    Read the article

  • PowerShell: Install-dotNET4 function

    - by marc dekeyser
    This function will download and install .NET 4.0. It uses the Get-Framework-Versions function to determine whether the installation is necessary or not. Internet connectivity is required, as the script downloads the setup file automatically (and then sleeps for 360 seconds... I had a function in there to monitor for install completion at first; it turns out the setup file spawns so many child processes that the function just got confused and locked up -_-). Alternatively, you could drop the installation file in the folder specified in the $folderPath variable. That will skip the download and use the file. This function adapts easily to other versions, e.g. I use it for PowerShell 3 installs as well!

        Function install-dotNet4 () {
            if (($InstalledDotNET -eq "4.0") -or ($InstalledDotNET -eq "4.0c")) {
                write-host ".NET 4.0 Framework is already installed" -foregroundcolor Green
            } else {
                # set a var for the folder you are looking for
                $folderPath = 'C:\Temp'
                # Check if the folder exists; if not, create it
                if (Test-Path $folderPath) {
                    Write-Host "The folder $folderPath exists." -ForeGroundColor Green
                } else {
                    Write-Host "The folder $folderPath does not exist, creating..." -NoNewline -ForegroundColor Red
                    New-Item $folderPath -type directory | Out-Null
                    Write-Host " - done!" -ForegroundColor Green
                }
                # Check if the file exists; if not, download it
                $file = $folderPath + "\dotNetFx40_Full_x86_x64.exe"
                if (Test-Path $file) {
                    write-host "The file $file exists." -ForeGroundColor Green
                } else {
                    # Download Microsoft .Net 4.0 Framework
                    Write-Host "Downloading Microsoft .Net 4.0 Framework..." -NoNewline -ForeGroundColor DarkYellow
                    $clnt = New-Object System.Net.WebClient
                    $url = "http://download.microsoft.com/download/9/5/A/95A9616B-7A37-4AF6-BC36-D6EA96C8DAAE/dotNetFx40_Full_x86_x64.exe"
                    $clnt.DownloadFile($url, $file)
                    Write-Host " - done!" -ForegroundColor Green
                }
                # Install Microsoft .Net Framework silently
                Write-Host "Installing Microsoft .Net Framework..." -NoNewline -ForegroundColor DarkYellow
                $dotNET4 = $folderPath + "\dotNetFx40_Full_x86_x64.exe /quiet /norestart"
                Invoke-Expression $dotNET4
                write-host " - done!" -ForegroundColor Green
                start-sleep -seconds 360
            }
        }

    Read the article

  • cloud/grid computing

    - by tom smith
    Hi guys. I'm apologizing in advance to the guys who will tell me this isn't a tech/server/IT issue! But I've been beating my head against this for a couple of days now. I'm trying to figure out whom to talk to, or which company I can approach, to see if there are grid/cloud computing companies who have programs set up to deal with colleges. I'm dealing with a compsci course, and we're looking at a few projects that would require a great deal of computing/computational resources. But in calling different companies (HP/Rackspace/etc.) I'm either not getting through to the right departments, or to the right people, or the companies just aren't set up for this. There are plenty of companies who have discounts for desktop software/hardware, but who in the biz deals with discounts/offerings for cloud/grid computing solutions? Any thoughts/pointers would be greatly appreciated. Thanks -tom

    Read the article

  • Logstash shipper & server on the same box

    - by keftes
    I'm trying to set up a central logstash configuration. However, I would like to send my logs through syslog-ng and not third-party shippers. This means that my logstash server accepts, via syslog-ng, all the logs from the agents. I then need to install a logstash process that will read from /var/log/syslog-clients/* and grab all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM. In theory I also need to configure a second logstash process that will read from redis, start indexing the logs, and send them to elasticsearch. My question: Do I have to use two different logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to have just one logstash configuration and have the process read from syslog-ng --- write to redis and also read from redis --- output to elasticsearch? Diagram of my setup:

        [client] -------syslog-ng---> [log server] ---syslog-ng <----logstash-shipper --- redis <----logstash-server ---- elastic-search <--- kibana

    Read the article

  • AWS SSL Load Balancer

    - by Jay Francis
    OK, I am looking for some pointers. Basically I have a white-label app/site that will allow users to set up their own domain to use for their customer front-end. We have 2 dedicated servers and a load balancer. The problem is SSL: we were thinking about using AWS ELB to handle the SSL load balancing, but we can't seem to figure out if it will handle it properly; it seems to be set up to work with EC2 instances, but we are using externally hosted servers via a load balancer. A blog post by AWS looks similar to what we need, but it only seems to work with EC2 instances. http://aws.typepad.com/aws/2011/08/elastic-load-balancer-ssl-support-options.html Has anyone had experience setting ELB SSL load balancers up to work with external servers?

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second one in his article.

    First, through my searching experience, there are a couple of people who probably have the knowledge to help out on this but don't know what GLUT is, so I'm going to try and explain (feel free to correct me) the relevant functions of this OpenGL toolkit for my problem. Skip this section if you know what GLUT is and how to play with it.

    GLUT Toolkit: GLUT is an OpenGL toolkit that helps with common tasks in OpenGL.

    - glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration.
    - glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. processAnimationTimer() will not be called every TIMER_MILLISECONDS, just once.
    - glutPostRedisplay() requests GLUT to render a new frame, so we need to call it every time we change something in the scene.
    - glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load.
    - glutGet(GLUT_ELAPSED_TIME) returns the number of milliseconds since glutInit was called (or the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's keep to this one for now.

    I think this is enough information on how GLUT renders frames, so people who didn't know about it can also pitch in on this question to try and help, if they feel like it.

    Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this:

        #define TICKS_PER_SECOND 30
        #define MOVEMENT_SPEED 2.0f

        const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)
            // Setup the camera position and looking point
            SceneCamera.LookAt();

            // Do all drawing below...
            (...)
        }

        void processAnimationTimer(int value) {
            // setups the timer to be called again
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);

            // Get the time when the previous frame was rendered
            previousTime = currentTime;

            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;

            /* Multiply the camera direction vector by constant speed then by the
               elapsed time (in seconds) and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

            // Requests to render a new frame (this will call my renderScene() once)
            glutPostRedisplay();
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)

            // Setup the timer to be called one first time
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Read the current time since glutInit was called
            currentTime = glutGet(GLUT_ELAPSED_TIME);

            glutMainLoop();
        }

    This implementation doesn't feel right. It works in the sense that it helps the game speed stay constant regardless of the FPS, so that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware, it's wrong. It's my understanding, though, that I still need to calculate elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time.

    I'm not sure how I can fix this and, to be completely honest, I have no idea what the game loop in GLUT is, you know, the while( game_is_running ) loop in Koen's article. But it's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) it was basically doing nothing, only a black screen; the CPU spiked to 25% and remained there until I killed the game, and then it went back to normal. So I don't think that's the path to follow.

    Using glutTimerFunc() is definitely not a good approach to performing all movements/animations, as I'm limiting my game to a constant FPS, which is not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT? I originally posted this question on Stack Overflow before being pointed to this site. The following is a different approach I tried after creating the question on SO, so I'm posting it here too.

    Another Approach: I've been experimenting, and here's what I was able to achieve now. Instead of calculating the elapsed time in a timed function (which limits my game's framerate), I'm now doing it in renderScene(). Whenever changes to the scene happen I call glutPostRedisplay() (i.e. camera moving, some object animation, etc...), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance. My code has now turned into this:

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)

            // Get the time when the previous frame was rendered
            previousTime = currentTime;

            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;

            /* Multiply the camera direction vector by constant speed then by the
               elapsed time (in seconds) and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

            // Setup the camera position and looking point
            SceneCamera.LookAt();

            // All drawing code goes inside this function
            drawCompleteScene();

            glutSwapBuffers();

            /* Redraw the frame ONLY if the user is moving the camera
               (similar code will be needed to redraw the frame for other events) */
            if (!IsTupleEmpty(cameraDirection)) {
                glutPostRedisplay();
            }
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)

            currentTime = glutGet(GLUT_ELAPSED_TIME);

            glutMainLoop();
        }

    Conclusion: it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera, the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that?

    Please note that I'm just doing this for fun; I have no intention of creating a game to distribute or something like that, not in the near future at least. If I did, I would probably go with something other than GLUT. But since I'm using GLUT, and apart from the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
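    For reference, a minimal sketch of the loop the fourth solution describes — fixed-rate logic updates with rendering as fast as possible, plus interpolation — written here in plain Python rather than GLUT/C so the structure stands out (update() and render() are hypothetical stubs; in GLUT the outer while would be replaced by an idle or display callback):

        import time

        TICKS_PER_SECOND = 30
        SKIP_TICKS = 1.0 / TICKS_PER_SECOND   # seconds of game time per logic tick
        MAX_FRAMESKIP = 5                     # cap on catch-up updates per frame

        def update():
            pass  # advance the game state by exactly one fixed tick (stub)

        def render(interpolation):
            pass  # draw, extrapolating moving objects by `interpolation` in [0, 1) (stub)

        next_tick = time.monotonic()
        deadline = next_tick + 2.0            # run the sketch for two seconds, then exit

        while time.monotonic() < deadline:
            loops = 0
            # Run game logic at a fixed rate, catching up if rendering was slow.
            while time.monotonic() > next_tick and loops < MAX_FRAMESKIP:
                update()
                next_tick += SKIP_TICKS
                loops += 1
            # Render as often as the hardware allows, blending between ticks.
            interpolation = (time.monotonic() + SKIP_TICKS - next_tick) / SKIP_TICKS
            render(interpolation)

    In this scheme the camera movement would live in update() with a constant per-tick step, while render() only interpolates, which is what decouples game speed from framerate.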

    Read the article

  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the same exact SAN setup. The idea is to have those two SANs contain the same exact data at all times, which will allow us to work with the same data pool, and which will also provide us with an offsite backup solution, should a failure occur at either end. We're running Server 2008. The objective is to enable users in the east coast office to work on files and have those changes be instantly updated on the Colorado SAN as well. We also need there to be file locking so that there will be no conflicts or overwritten changes if users attempt to work on the same file. Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off? As I understand it, DFS-R does not provide file locking, so if we used that, we would need to go with a third-party product like Peerlock. But I don't even know if DFS-R is an option. Can it replicate quickly enough over a WAN link? Can any product? It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at? There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light. If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.

    Read the article

  • setting up a RHEL 5.x RPM build server for mortal users

    - by Chen Levy
    My task is to set up a RHEL 5.x build host that can build RPMs for mortal users. On RHEL 6.x with rpm version 4.8, I have in /usr/lib/rpm/macros:

        # Path to top of build area.
        %_topdir  %{getenv:HOME}/rpmbuild

    On RHEL 5.x with rpm version 4.4, %{getenv:HOME} is not available. I know that I can use /home/someuser/.rpmmacros:

        %_topdir  /home/someuser/rpmbuild

    and this will work for that user, but I don't want to do this for every user separately. Moreover, since .rpmmacros will not expand ${HOME} or ~, I suspect it is unsafe to use those. This in turn makes /etc/skel unsuitable for this task (or so I suspect). So in short, my question is: how do I set up a RHEL 5.x host that allows all users to build RPM packages in their home directory?
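    One hedged workaround sketch (assumption: generating each user's ~/.rpmmacros at login time sidesteps rpm 4.4's missing %{getenv:...} by expanding $HOME before rpm ever sees it; the script and its hook location are illustrative, not a packaged solution):

        # create_rpmmacros.py -- hypothetical helper, e.g. invoked from a login hook;
        # kept compatible with RHEL 5's Python 2.4.
        import os

        macros = os.path.expanduser("~/.rpmmacros")
        topdir = os.path.expanduser("~/rpmbuild")

        if not os.path.exists(macros):
            if not os.path.isdir(topdir):
                os.makedirs(topdir)                   # build area under the user's home
            f = open(macros, "w")
            f.write("%_topdir " + topdir + "\n")      # literal path, already expanded
            f.close()

    Each user then gets a private %_topdir without anyone editing files by hand.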

    Read the article

  • Building a new PC, Installing XP, blue screen of death

    - by Tim
    I got a Gigabyte barebones kit and am installing Windows XP (SP1). The initial setup works, then it restarts and goes into the second phase of the setup. Then, when installing components (I think that's what it says), it gets halfway done and comes up with a blue screen saying IRQL_NOT_LESS_OR_EQUAL. BUT! I got past that by installing Windows XP Media Center Edition. Now I am trying to install the drivers for my Asus ATI Radeon 5770 graphics card and I get another blue screen of death that doesn't give much info, something about address 0x0000005. Would you think there is something wrong with my system, or do you think that getting Windows 7 would take care of things? Sorry for probably not giving enough info. Here is what I have:

        Motherboard - Gigabyte S-series GA-H55M-S2(v)
        PSU - Ultra 500 watt ATX
        HDD - SATA Seagate Barracuda 7200
        CPU - Intel i3
        Memory - 4 GB Crucial
        Graphics card - Asus ATI Radeon 5770 1 GB DDR5

    Read the article

  • Openfiler RAID 10 option not found

    - by chrisling106
    Hi, I'm building a NAS using Openfiler 2.3 (from the 32-bit ISO). First of all I want to experiment with it in a VM before going out and buying the hard drives needed. I created 5 virtual drives in VMware: sda is 2 GB and the rest are 1 GB each (sdb to sde). I left sda blank and want to set up a RAID 10 disk using sdb, sdc, sdd and sde. The 4 RAID partitions are set up successfully, but when I try to create a RAID device, the only options for RAID level are 1, 0, 5 and 6. RAID 10 is not there! Can someone let me know what I have missed, please? TIA.

    Read the article

  • How does the Ubuntu cloud version enforce the "no root login" over ssh?

    - by Maxim Veksler
    Hello, I'm looking to tweak the Ubuntu cloud version's default setup, where it denies root login. Attempting to connect to such a machine yields:

        maxim@maxim-desktop:~/workspace/integration/deployengine$ ssh [email protected]
        The authenticity of host 'ec2-204-236-252-95.compute-1.amazonaws.com (204.236.252.95)' can't be established.
        RSA key fingerprint is 3f:96:f4:b3:b9:4b:4f:21:5f:00:38:2a:bb:41:19:1a.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'ec2-204-236-252-95.compute-1.amazonaws.com' (RSA) to the list of known hosts.
        Please login as the ubuntu user rather than root user.
        Connection to ec2-204-236-252-95.compute-1.amazonaws.com closed.

    I would like to know where this is set up and how I can change the printed message. Thank you, Maxim.
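    A hedged pointer (assumption, based on how Ubuntu's EC2 images are commonly set up: the message comes from a forced command in root's ~/.ssh/authorized_keys, not from sshd_config). The entry looks roughly like the line below, and editing the echo text in that file would change the printed message:

        command="echo 'Please login as the ubuntu user rather than root user.';echo;sleep 10" ssh-rsa AAAA... key-comment

    This is a sketch from memory; check /root/.ssh/authorized_keys on the instance (and cloud-init's configuration, which may rewrite the file on boot) before relying on it.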

    Read the article

  • How can I force Parallels' networking to obtain an IP through a wireless router?

    - by RLH
    Here is my setup: I have a MacBook, a Thunderbolt display and an Ethernet connection plugged into the Thunderbolt display. During the day, most of my network use can (and should) go over the Ethernet associated with the display. However, I also need to be able to connect to a wireless router. This hasn't been a problem on the Mac OS X side, but the program that I need to run over the router has to obtain an IP address from the wireless access point. Considering my current setup, how can I leave it so that I can access the internet in OS X, yet have my Windows 7 instance running in Parallels get its assigned IP address from a wireless router that my Mac is also connected to? I've fiddled around with Parallels' network settings for an hour, and I can't get Parallels to see the router, even though my Mac is certainly connected to it.

    Read the article

  • One bigger Virtual Machine distributed across many OpenStack nodes [duplicate]

    - by flyer
    This question already has an answer here: Can a virtualized machine have the CPU and RAM resources of multiple underlying physical machines? (2 answers) I just set up virtual machines on one piece of hardware with Vagrant. I want to use Puppet to configure them and next try to set up OpenStack. I am not sure I understand how this should look in the end. Is it possible to have the architecture below with OpenStack, where I will run one virtual machine with Linux?

        -------------------------------
        |          VM with OS         |
        -------------------------------
        |  NOVA  |  NOVA  |  NOVA     |
        -------------------------------
        |          OpenStack          |
        -------------------------------
        |  Node  |  Node  |  Node     |
        -------------------------------

    More details: In my environment the Nodes are just virtual machines, but my question concerns separate hardware nodes. If we imagine these Nodes (Novas) are placed on separate machines (e.g. each has 4 cores), can I run one virtual machine across many OpenStack Nodes? Is it possible to aggregate the computation power of OpenStack in one virtual distributed operating system?

    Read the article

  • Re-setup kernel for VirtualBox and now Ubuntu is in initramfs state

    - by UbuntuMan
    My 10.04 LTS became read-only, so I wanted to reboot this virtual machine. Then I couldn't get it back up and running. VirtualBox threw a Result Code: NS_ERROR_FAILURE (0x80004005) error. I did the kernel re-setup:

        sudo /etc/init.d/vboxdrv setup
         * Stopping VirtualBox kernel modules                              [ OK ]
         * Uninstalling old VirtualBox DKMS kernel modules                 [ OK ]
         * Trying to register the VirtualBox kernel modules using DKMS     [ OK ]
         * Starting VirtualBox kernel modules                              [ OK ]

    I can't even use sudo in this state. What can I do?

        (initramfs) /dev/sda1
        /bin/sh: /dev/sda1: Permission denied
        (initramfs) /dev/sda2
        /bin/sh: /dev/sda3: Permission denied
        (initramfs) /dev/sda3
        /bin/sh: /dev/sda1: not found

    I have a similar image, so I can check the disk settings if needed. Please help me. Thanks. I have an older version of VirtualBox, 4.0.8 or something like that, but other VMs on the same VirtualBox are working fine. UPDATE: I can hold down the SHIFT key and see the GRUB menu. Only one kernel exists:

        Ubuntu(...)
        Ubuntu(recovery)
        master test
        master test

    Read the article

  • Software center not opening

    - by kishore kumar
    $ software-center
        2012-09-07 18:45:04,349 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/dbus/proxies.py', 410, '_introspect_error_handler')'
        2012-09-07 18:45:04,349 - dbus.proxies - ERROR - Introspect error on :1.128:/com/ubuntu/Softwarecenter: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        2012-09-07 18:45:29,406 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
        2012-09-07 18:45:29,409 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
        2012-09-07 18:45:29,822 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
        2012-09-07 18:45:29,973 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None.
        2012-09-07 18:45:29,991 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()
        Killed

    Read the article

  • Sonicwall NAT Policy Loopback

    - by John
    I have an issue and am pretty perplexed by it. I have a SonicWall, and it's set up with NAT policies and reflexive NAT for an internal web server. That is, only 2 policies, no loopback policy, and the internal clients can access the web server by public IP with no problems. Now, on another connection with another SonicWall, I have the exact same setup for another web server, with the exact same policies (obviously different IPs), and the internal clients can't access the internal website by its public IP without creating the loopback policy. Maybe on the first one I've overlooked it, but I don't see any loopback policy whatsoever, and it's working fine. My question is: does anyone know why the first one works like this but the second one needs the loopback policy? Thanks

    Read the article

  • Cloud computing - database loading question

    - by workwise
    Following is the situation. I want to know whether what I want is possible in cloud computing and whether it is the best way for me:

    1) My main site has a database with tables with millions of rows, and entries are added almost every second.
    2) I will set up a MySQL mirror, so there will be a backup database always in sync with the main one.
    3) There are a few tens of thousands of images - growing. So say the total size of the images is a few tens of gigabytes. I will be keeping the image data also in sync on the backup server.
    4) There can be short periods where traffic goes to 100X the average traffic.
    5) I will be using memcache heavily - most database data and even frequently used disk files/images will be in RAM.

    I want the main site to run on a dedicated server. The backup server is, say, an Amazon EC2 instance. Now note that since it is a live backup, I need to run a small instance continuously. I want that when I anticipate high traffic, I should be able to run a large instance on the cloud and transfer the traffic there. The main point is - I do not want to spend time "loading" the database on the large instance, as that can typically take a few minutes or even hours (from experience). So is it possible to just scale the memory/CPU on demand, and not have to load the database or sync up the filesystem? I want to set up my backup scripts etc. just ONCE. Thanks JP

    Read the article

  • IIS- defining a website as a dev site

    - by Lock
    I am new to IIS. Is there a way, during the setup of IIS, to set a variable of some sort that I can use to tell my site that this is the development copy? I am using PHP via IIS 7.5 and would like to have a file with a few lines that define which databases etc. are used by my application. Is this the purpose of web.config? I would love there to be a place in the setup of the website where I can set a few variables that are accessible by my application. That way, when I migrate files to live, I don't need to worry about access details for databases etc.

    Read the article

  • What is the easiest and cleanest way to create a chrooted SFTP on CentOS 5.4?

    - by benjisail
    Hi, I would like to set up an SFTP login with chroot (or equivalent) on my CentOS 5.4 server in a clean way. By a clean way I mean using only the yum command if possible, with something easy to maintain and easy to extend (for example, an easy way to add an extra SFTP user). The problem with CentOS 5.4 is that OpenSSH is at version 4.3 in the repository, so it is not possible to use the built-in chroot capabilities of OpenSSH 4.8+. Installing rssh requires manually creating a chrooted directory, which doesn't seem easy to maintain to me. MySecureShell is another solution, but it requires a higher version of OpenSSL than the one in the repository. I know that I could manually install a higher version of OpenSSH, but I would lose all the advantages of the yum command, and it could become tricky to maintain if I want to do some updates in the future... Do you have an easy and clean way to set up a chrooted SFTP login on a CentOS 5.4 server? Thanks!

    Read the article

  • how to configure my internal DNS to resolve external resources

    - by Ralph Shillington
    I have an internal DNS as part of my AD setup. I have a hosted DNS for public resources (which are typically at some data centre somewhere). Occasionally, while on our internal network, I need to get to a public resource --- for example www.ourcompany.com. Since there isn't a www record in our internal DNS, I can't get the name resolved. How do I configure my DNS to forward names it doesn't recognise to the public DNS? Update: As per the comment, yes, I have a "split-horizon" DNS (which seemed like a good idea at the time). This AD setup is less than 24 hours old and can be redone if need be (although I would rather not).
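    A hedged note on the split-horizon case: a forwarder only helps for zones the internal server is not authoritative for, so if the internal zone is ourcompany.com, the missing www name has to be added to that zone instead. One sketch of doing that from the DNS server (203.0.113.10 is a placeholder for the public web server's IP):

        dnscmd /recordadd ourcompany.com www A 203.0.113.10

    The same record can be created in the DNS Manager console; for names outside the internal zones, configuring forwarders in the DNS server's properties covers the rest.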

    Read the article

  • How to mount an external HDD?

    - by Slash
    I have Ubuntu Linux 12.04, the latest version right now. I want to mount a 1 TB NTFS external HDD. I have followed many guides but still no success. The error I'm getting is this:

        Failed to read last sector (1953523119): Invalid argument
        HINTS: Either the volume is a RAID/LDM but it wasn't setup yet,
           or it was not setup correctly (e.g. by not using mdadm --build ...),
           or a wrong device is tried to be mounted,
           or the partition table is corrupt (partition is smaller than NTFS),
           or the NTFS boot sector is corrupt (NTFS size is not valid).
        Failed to mount '/dev/sdb1': Invalid argument
        The device '/dev/sdb1' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a partition
        (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    Using Storage Device Manager I get this error:

        Error mounting: mount exited with exit code 1: helper failed with:
        mount: only root can mount /dev/sdb1 on /media/Skliros_Diskos {external disk name}

    When I use sudo fdisk -l, this is the output:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e0bc6

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   618854399   309426176   83  Linux
        /dev/sda2       618856446   625141759     3142657    5  Extended
        /dev/sda5       618856448   625141759     3142656   82  Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes
        255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002093a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            2048  1953525167   976761560    7  HPFS/NTFS/exFAT

    Read the article
