Search Results

Search found 26810 results on 1073 pages for 'fixed point'.


  • How to restore default iPod playlists on Amarok?

    - by obvio171
    I wanted to "reset" the collection on my iPod and ended up accidentally deleting, through Amarok, all the playlists, including the default ones like "Most Played" and "Highest Rated". Since these are dynamic playlists with a special meaning for the iPod, I don't think creating new, normal playlists with the same names will bring their special behavior back. How do I restore them with the same dynamic functionality? Is there a way to do that in Amarok? Rhythmbox? GTKPod? The command line? P.S.: I'm not entirely sure what the policy on iPod questions is, but this one in particular seems to me to be very computer-related because, although it's about interfacing with a device, everything has to be done on my computer, using standard PC libraries/programs, etc. If it's still off-topic, please point me to where I could post it.

    Read the article

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere - I'm rather new to DX. My question concerns conservation of resources - specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, a copy of any texture data supplied has been made elsewhere, likely in VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects which point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate the code more than would otherwise be needed.

    Read the article

  • Implementing AS 2.0 with FSM?

    - by Up2u
    I have seen many references on AI and FSMs, like http://www.richardlord.net/blog/fini...n-actionscript, but sadly I still can't understand the point of an FSM in AS 2.0. Is it a must to create a class for each state? I have a game project with an AI, and the AI has 3 states: check distance, chase target, and hit the target. The game I'm creating is an FPS played with the mouse. What I mean is, I have already created the AI (successfully), but I want to convert it to the FSM approach. I created a function CheckDistanceState(); in that function I lock onto a target by sorting an array by nearest distance, and it then triggers the function ChaseState(), and inside ChaseState() I call the Hit() function to destroy the enemy. I call these 3 functions from AI_cursor.onEnterFrame (it's an FPS game that only has a cursor on the stage). Is there any chance to implement an FSM in my code without creating a class? From what I've read, creating a class means creating external code outside of the frame (I'm used to coding in the frame), and I still don't understand that. Sorry if my explanation isn't clear...
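
    A minimal sketch of the usual answer, shown in C# only because the idea is language-neutral (in AS2 the same thing works with plain function references stored in an object on the timeline): keep one ordinary update function per state plus a variable holding the current state, so no per-state classes are needed. The state and type names below mirror the three states described above and are otherwise made up.

        using System;
        using System.Collections.Generic;

        enum AiState { CheckDistance, Chase, Hit }

        class EnemyAi
        {
            AiState state = AiState.CheckDistance;
            readonly Dictionary<AiState, Action> handlers;

            public EnemyAi()
            {
                // one plain function per state -- no separate class per state
                handlers = new Dictionary<AiState, Action>
                {
                    { AiState.CheckDistance, CheckDistance },
                    { AiState.Chase,         Chase },
                    { AiState.Hit,           Hit }
                };
            }

            // call this once per frame (the AS2 equivalent is the onEnterFrame handler)
            public void Update() { handlers[state](); }

            void CheckDistance() { /* sort targets by distance; once locked: state = AiState.Chase; */ }
            void Chase()         { /* move toward the target; if close enough: state = AiState.Hit; */ }
            void Hit()           { /* destroy the target; state = AiState.CheckDistance; */ }
        }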

    Read the article

  • print job doesn't go to print queue

    - by flatsguide
    I have two printers hooked up to my Intel iMac, and I am having a printing error. It seems that whenever I try to print (a simple text doc), I am unable to get the print job to go to the print queue with one of my printers. I have an HP c7280 and an HP c3100. I am able to get one working properly, but with the other the job never seems to reach the print queue. I have switched USB cables (with the printer that I know works), and both printers are recognized by the computer in the printer preferences pane. I've tried resetting the printing system in the printer setup utility, reloading the drivers from HP, etc. If anyone has a suggestion or could point me to a little help, I'd be very grateful. Best regards, B

    Read the article

  • Unable to view 2 local sites over network

    - by gentrobot
    I have 2 websites running on my local machine that I'd like to view from other machines on the same network. For /etc/apache2/sites-available/site1.com: <VirtualHost *:80> ServerName site1.com DocumentRoot /var/www/answers/app/webroot DirectoryIndex index.php <Directory "/var/www/answers/app/webroot"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> For /etc/apache2/sites-available/site2.com: <VirtualHost *:80> ServerName site2.com DocumentRoot /var/www/answers2/app/webroot DirectoryIndex index.php <Directory "/var/www/answers2/app/webroot"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> I have added 2 entries to the /etc/hosts file: 127.0.0.1 site1.com 127.0.0.1 site2.com Now, when I point the browser on my machine to site1.com, it shows me the first site, and pointing the browser to site2.com shows me the second site. However, when I type my machine's local IP into the browser, it always shows site2. How can I change it to switch between site1 and site2? Is there a way I can view both sites from another machine (esp. mobile devices over the wireless network)?
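
    Since the virtual hosts are selected by hostname, a request by bare IP can only ever hit one of them. One sketch of a workaround for machines whose hosts file you can edit (the LAN IP below is a placeholder for this machine's actual address) is to repeat the same mapping there, so both names resolve to this box:

        # /etc/hosts on the other machine (use the dev box's real LAN IP)
        192.168.1.50   site1.com
        192.168.1.50   site2.com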

    Read the article

  • Apache Default/Catch-All Virtual Host?

    - by SJaguar13
    If I have 3 domains, domain1.com, domain2.com, and domain3.com, is it possible to set up a default virtual host for domains not listed? For example, if I would have: <VirtualHost 192.168.1.2 204.255.176.199> DocumentRoot /www/docs/domain1 ServerName domain1 ServerAlias host </VirtualHost> <VirtualHost 192.168.1.2 204.255.176.199> DocumentRoot /www/docs/domain2 ServerName domain2 ServerAlias host </VirtualHost> <VirtualHost 192.168.1.2 204.255.176.199> DocumentRoot /www/docs/everythingelse ServerName * ServerAlias host </VirtualHost> If you register a domain and point it to my server, it would default to everythingelse, showing the same content as domain3. Is that possible?
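
    As far as I know, ServerName doesn't accept a wildcard, but ServerAlias does; a hedged, untested sketch reusing the paths from the question: either make the catch-all the first <VirtualHost> listed for the address (the first one acts as the default when no ServerName/ServerAlias matches), or list it last with a wildcard alias so the intent is explicit:

        # listed LAST so the named hosts above it are matched first
        <VirtualHost 192.168.1.2 204.255.176.199>
            DocumentRoot /www/docs/everythingelse
            ServerName domain3.com
            ServerAlias *
        </VirtualHost>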

    Read the article

  • Virtual hosts and subdomains, need configuring on live, not on dev

    - by rix
    I have a query about subdomains and virtual hosts. On my dev server, I can set up a new virtual host and point it to a subdomain, i.e.: newdomain.mydevserver.com. All is fine and I can access 'newdomain.mydevserver.com'; I can also do 'host newdomain.mydevserver.com'. However, when I do the same on my main server I get a "server couldn't be contacted" error - I'm guessing that's because I need a CNAME or A record in the DNS. My question is, why does it seem that I can create subdomains on my dev server but not on my live server? Help appreciated, many thanks!

    Read the article

  • How do I create email addresses for all my users using my domain ([email protected])?

    - by user33012
    My client wants to create email addresses for all their DotNetNuke users using their domain. The point is to keep the users' email addresses 'private' while still allowing communication through a public email address that they can control. It's not necessary to have a full webmail interface (although that would be nice); I'm thinking it would be enough just to forward any mail on, acting as a gateway. So if an email was sent to [email protected], it would be forwarded on to the email address associated with the DotNetNuke account with username 'rwain'. Is this possible to do in a shared hosting environment? Or do I need to create some custom mail server that does a conversion of the email address and forwards it?

    Read the article

  • Does Dreamweaver subversion support branching?

    - by John Isaacks
    Adobe Dreamweaver added support for Subversion in CS4. I have CS5; I am able to update and commit, and they even have a very easy rollback option where you can promote any revision number to "head", but I cannot figure out any way to branch using it. According to Adobe, their Subversion support is not full-featured, but I cannot find any resource that states exactly what is supported. So is branching one of the things not supported? (I kind of feel like, what's the point without branching?) If you can do it, how?
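
    For what it's worth, even if Dreamweaver's UI never exposes it, a Subversion branch is just a cheap server-side copy, so one fallback sketch is to create the branch from any command line and keep using Dreamweaver for update/commit (repository URLs below are placeholders):

        svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/branches/my-feature -m "create branch for my-feature"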

    Read the article

  • Capture the active window to jpg using C# ?

    - by user29004
    Hello, I am creating a Windows application that captures session details for all users. I want to track how many users are connected to my machine via RDP sessions, and log all their activity with screenshots. So I would like to know how I can capture the active screen. I tried 'RUNAS', but it shows me a screenshot of the session where I run it. I am able to access the user's application data but cannot get the actual screen. I also tried LogonUser and CreateProcessAsUser but am not getting anywhere. So please let me know how I can do it. I used the 'cassia' library (from Google Code) to list all the currently logged-on users. Thanks Laxmilal
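
    As a starting point, a minimal sketch of capturing the desktop of the session the code runs in, using System.Drawing (the output path is made up). Capturing other users' RDP sessions generally needs a capture process launched inside each target session, since a desktop can only be read from within its own session.

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Windows.Forms;   // Screen.PrimaryScreen

        static class ScreenGrabber
        {
            // grabs the visible desktop of the session this code runs in and saves it as a JPEG
            public static void CaptureToJpeg(string path)
            {
                Rectangle bounds = Screen.PrimaryScreen.Bounds;
                using (var bmp = new Bitmap(bounds.Width, bounds.Height))
                using (var g = Graphics.FromImage(bmp))
                {
                    g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
                    bmp.Save(path, ImageFormat.Jpeg);   // e.g. @"C:\temp\capture.jpg"
                }
            }
        }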

    Read the article

  • What is the Optimal Server Configuration for Split-Path Testing?

    - by doug
    I am far from an expert on Apache, or any server for that matter, so I apologize if this question is poorly worded, which it likely is. We have always relied on a vendor for split-path testing (aka "A/B testing"). If you're not familiar with that term, it's a form of marketing research in which you slightly modify one of your web pages (usually one nearest the point of conversion), say, for instance, by changing the position of the "Buy Now" button or its color/contrast/texture, and then serve one of those two pages to a given user based on random selection. By doing split-path testing ourselves, I suspect we can do it far more cheaply and shorten cycle times as well. What is the optimal set-up for these tests? "Optimal" is based on the following criteria: how quickly/easily new tests can be set up and put online, and minimal disruption to overall site performance.
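
    Whatever serves the pages, the core mechanic is small; a hedged sketch (in C#, with made-up file names) of assigning a visitor to a variant deterministically, so the same visitor keeps seeing the same page on repeat visits:

        using System.Security.Cryptography;
        using System.Text;

        static class SplitTest
        {
            // deterministic 50/50 assignment: the same visitor id always gets the same page
            public static string VariantFor(string visitorId)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(visitorId));
                    return (hash[0] % 2 == 0) ? "buy_now_a.html" : "buy_now_b.html";
                }
            }
        }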

    Read the article

  • CSOM (Client Side Object Model) - What's new with SharePoint 2013

    - by KunaalKapoor
    SharePoint CSOM

    The Client-Side Object Model, or CSOM, came out with SharePoint 2010. CSOM is accessible through client.svc, but all client.svc calls must go through supported WCF entry points (the supported entry points are .NET, Silverlight and JavaScript). So a developer needs to use the client-side proxy objects exposed by either a .NET assembly or a JavaScript library.

    Changes with SharePoint 2013: REST capabilities (direct access to client.svc) and new APIs (app model).

    REST Capabilities

    One of the most important changes to the CSOM in SharePoint 2013 is that the client.svc web service entry point has been extended to allow direct access via REST-based web service calls. This is a really critical change, since it makes the SharePoint platform accessible to any other platform, opening the horizons of integration and collaboration with other REST-based platforms and devices. OData (a really popular standard data access API for HTTP-based clients) is supported similarly to 2010, but will be a more important aspect of SharePoint 2013 development.

    New APIs

    CSOM for SharePoint 2013 has been buffed up with several new APIs, covering not only SharePoint server functionality but also Windows Phone applications. In a SharePoint 2010 farm, most of the new APIs mentioned below are available only via server-side APIs: Search, Taxonomy, Publishing, Workflow, User Profiles, E-Discovery, Analytics, Business Data, IRM, Feeds.

    SharePoint 2013 remote APIs being accessible through both CSOM and REST is very important to the new app model, where developers can no longer run code in a SharePoint environment nor access the server-side APIs, so CSOM plays the savior here. Also, you can now substitute the alias '_api' to reference '_vti_bin/client.svc'.
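
    For reference, a minimal managed CSOM call looks roughly like the sketch below (the site URL is made up); the REST equivalent of the same read is a GET against http://yourserver/sites/dev/_api/web/title.

        using System;
        using Microsoft.SharePoint.Client;   // the .NET client proxy over client.svc

        class Demo
        {
            static void Main()
            {
                using (var ctx = new ClientContext("http://yourserver/sites/dev"))
                {
                    Web web = ctx.Web;
                    ctx.Load(web, w => w.Title);   // queue what to fetch
                    ctx.ExecuteQuery();            // one round trip to client.svc
                    Console.WriteLine(web.Title);
                }
            }
        }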

    Read the article

  • How to factorize code in Unreal Kismet (i.e. "Material Function"s for Kismet)

    - by Georges Dupéron
    In the Unreal Development Kit, when using the Material Editor, one can factorize frequently-used groups of nodes by creating a Material Function (Content Browser -> right-click -> New Material Function, IIRC). When defining the behaviour of some actor in Kismet, one can easily have a dozen nodes involved. If I have many actors that share the same behaviour, then I'll copy-paste these nodes and change the variables so they point to the other actors. This leads to inconsistencies (a modification to the behaviour of one actor isn't propagated to the copy-pasted nodes), complexity (you end up with hundreds of nodes), and generally useless effort. My question is: can I create a "Kismet function", just like a material function? Note: I'd rather avoid using UnrealScript. I don't even know where to type UnrealScripts, don't know where the documentation is, and more generally don't have enough time to invest in learning UnrealScript. This "Kismet function" feature must be usable by graphists (with little programming knowledge). If a (simple) script suffices to add this feature to the Kismet editor, so that one can create several "functions" without using UnrealScript, then fine, but I don't really want to have to write a script each time I want to factorize a few nodes. Thanks for any information!

    Read the article

  • How do functional languages handle a mocking situation when using Interface based design?

    - by Programmin Tool
    Typically in C# I use dependency injection to help with mocking: public class UserService { public UserService(IUserQuery userQuery, IUserCommunicator userCommunicator, IUserValidator userValidator) { UserQuery = userQuery; UserValidator = userValidator; UserCommunicator = userCommunicator; } ... public UserResponseModel UpdateAUserName(int userId, string userName) { var result = UserValidator.ValidateUserName(userName); if(result.Success) { var user = UserQuery.GetUserById(userId); if(user == null) { throw new ArgumentException(); } user.UserName = userName; UserCommunicator.UpdateUser(user); } ... } ... } public class WhenGettingAUser { public void AndTheUserDoesNotExistThrowAnException() { var userQuery = Substitute.For<IUserQuery>(); userQuery.GetUserById(Arg.Any<int>()).Returns(null); var userService = new UserService(userQuery); AssertionExtensions.ShouldThrow<ArgumentException>(() => userService.GetUserById(-121)); } } Now in something like F#: if I don't go down the hybrid path, how would I test workflow situations like the above, which normally would touch the persistence layer, without using interfaces/mocks? I realize that every step above would be tested on its own and would be kept as atomic as possible. The problem is that at some point they all have to be called in sequence, and I'll want to make sure everything is called correctly.
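
    The usual functional-style answer is to pass the dependencies in as functions rather than interfaces, so a test just supplies lambdas; a rough sketch of that shape (still written in C# here, with a made-up User type, but it translates directly to F# function parameters):

        using System;

        public class User { public string UserName { get; set; } }

        public static class UserWorkflow
        {
            // dependencies are plain functions, so tests need no mocking framework
            public static void UpdateUserName(
                Func<int, User> getUserById,
                Action<User> updateUser,
                int userId, string userName)
            {
                var user = getUserById(userId);
                if (user == null) throw new ArgumentException("no such user");
                user.UserName = userName;
                updateUser(user);
            }
        }

        // in a test, the "mock" is just a lambda:
        //   UserWorkflow.UpdateUserName(id => null, u => { }, -121, "bob");  // expect ArgumentException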

    Read the article

  • Have Free website Hosting with Google App Engine

    - by mickthompson
    I'm reading about Google App Engine. I'm creating a bunch of simple dynamic websites in Java, and I'm considering using Google App Engine to set up my clients' websites on it. That way I only have to register a domain like www.myclientdomain.com and then point it to the Google App Engine application... In this way I plan to avoid hosting costs; in fact, I'm currently paying even to host a few static HTML pages... Do you think it is possible to use Google App Engine for this purpose?

    Read the article

  • How do sites avoid SEO issues / legalities with subdomain unique ids?

    - by JM4
    I was looking through a few websites recently and noticed a trend I'm not sure I understand. Sites are creating unique referral URLs for customers in the form http://customname.site.com (if somebody were to use http://www.site.com/customname it would function the same way). Using Google Chrome I can see the sites are doing 302 redirects at some point, apparently via some sort of htaccess rewrite that takes the subdomain name (customname), applies it as a referral parameter, and then keeps it in session during the entire process. However, there must be thousands of these custom URLs that people are typing in. How is each one of these "subdomains" not treated as a separate URL which in turn is redirected to the same page (in short, generating tons of links all pointing to the same page, which Google would normally frown upon)? Additionally, the links also appear on the sites themselves as clickable links, so I'm not sure how these are not tracked. Similarly, the "unique" URL is not indexed or cached in any Google search results. How is this capability handled? It does NOT highlight the referral aspect, but a true example of this is visiting http://sfgiants.com, which does a 302 redirect to the much longer proper San Francisco Giants MLB homepage. I am wondering how SFgiants.com is not indexed (assuming that direct shortened link appears on several MLB pages)? 1 - I know these are 302 redirects; I can see this in the site's network flow. 2 - These links do in fact appear on the page itself, because in some areas (for example, the bottom of the page) it may say: "Send this page to a friend! http://name.site.com/", which in turn would again redirect to something like http://www.site.com?id=name so the id value could be stored in session.
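
    The internal rewrite being described is typically only a few lines of mod_rewrite; a hedged sketch (the hostnames and the id parameter are placeholders mirroring the question, and it assumes a wildcard DNS record plus a vhost with ServerAlias *.site.com so every subdomain reaches the same document root) that turns customname.site.com into a parameter on the canonical host with a 302:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteCond %{HTTP_HOST} ^([^.]+)\.site\.com$ [NC]
        RewriteRule ^(.*)$ http://www.site.com/$1?id=%1 [R=302,L]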

    Read the article

  • Why did my mouse/trackpad suddenly stop working on my MacBook under Windows XP?

    - by Jack B Nimble
    My mouse and trackpad have suddenly stopped working on my MacBook, which is running Windows XP via Boot Camp. Both the USB mouse and the trackpad work fine in OS X. The Device Manager says that the driver is missing or corrupt for both items (they have a yellow exclamation point). I tried uninstalling and reinstalling the mice. Since the USB mouse is a Logitech, I tried replacing the driver with one downloaded from Logitech.com. Does anyone know how I can repair the mouse driver that Windows XP is using?

    Read the article

  • So my employer wants me to do less programming and focus on IT support

    - by Rich
    I was hired into a non tech company's IT department as a programmer a few years back, and after several rounds of lay offs, we're down to a skeleton crew. I've saved the company hundreds of thousands of dollars with my projects and management has been happy with them (although most of the stakeholders have since left the company). Management now wants me to limit the programming that I do and spend most of my time on IT support: putting out fires, dealing with vendors, outsourced contractors, supporting company systems, managing projects, etc. I am a little burnt out on programming since I've been pushed pretty hard for the past several years. However, I'm not sure if this is a good career move in the long run. I'm a decent programmer (and also good with databases) but not obsessed with it to the point of coding outside of work. I'm approaching my mid 30s and there's potential ageism to deal with down the line. While I'm fortunate to have survived the lay offs, it sorta feels like my job is being "dumbed down". I have both good technical skills and people skills...but it doesn't take a genius to do what I'm doing now. And my success is being increasingly linked to others' performance rather than my own... Just looking for some advice. Is it time to move on? That's not really an easy thing to do since I'd likely have to move to another area to find another comparable tech job. Should I go after another pure technical role? Or should I stay and try to make this work? People say do what you "enjoy" but it doesn't really matter to me as long as I'm getting paid. Also the ageism thing is on the horizon and could be an issue eventually. I'm making a decent (but not great) salary. Should I chase money and maximize my income while I still have a chance? Or be happy with a moderate salary and 40 hour work week?

    Read the article

  • Why is ext3 so slow to delete large files?

    - by Janis Peisenieks
    I have a server which makes an incremental backup of a system every night. On Saturdays, there is a full backup. After the full backup has finished, a script kicks in that deletes the incrementals. Now, the script sometimes breaks, because the incrementals are each about 10 GB and deleting them sometimes takes too long for the script. Could someone explain to me, or point me in the direction of a resource that explains, why ext3 is so slow to delete files compared to, let's say, NTFS? I know these are two completely different file systems, but I'm really interested in why there is such a big difference in deletion speed.

    Read the article

  • "The connection has timed out" - Please help!

    - by gon
    I recently installed a fresh Ubuntu 12.04 LTS on a desktop, and the installation itself was successful (other than a 'grub rescue' issue that I encountered but fixed), but this connection problem is really giving me a headache. Symptoms: 1. When I open the Firefox browser and try to connect to a website, it just hangs for a while saying "Connecting..." but eventually loads an error page, "The connection has timed out". 2. It's not a browser problem (and I tried setting the ipv6 option to "true" at about:config), because running "sudo apt-get install [some-random-package]" at the terminal fails too ("E: Unable to locate package [package]"). All other operations that need internet access are not working. 3. I certainly see a wired network (called "eth1") in the Network Manager, and it says "Connection Established" after disconnecting and then connecting again. I have tried almost everything that could be found from Google search results, still no luck; the problems slightly differ from mine or the solutions just don't work. By the way, it didn't have internet access when installing Ubuntu 12.04 (I ignored the message that I need internet to install Ubuntu). Could this be a problem? I'm sorry, I don't remember if the internet worked or not on the previous version of Ubuntu. :( I would really appreciate your help... I don't even know what more to do if this fails too. Thanks!!

    Thanks for your comment. Here is the result of ifconfig: eth0 Link encap:Ethernet HWaddr 78:ac:c0:3d:b2:b9 inet addr:10.10.65.185 Bcast:10.10.65.255 Mask:255.255.255.0 inet6 addr: fe80::7aac:c0ff:fe3d:b2b9/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3907 errors:0 dropped:0 overruns:0 frame:0 TX packets:771 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:393118 (393.1 KB) TX bytes:73472 (73.4 KB) Interrupt:16 eth1 Link encap:Ethernet HWaddr 78:ac:c0:3d:b2:b8 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:4 errors:0 dropped:0 overruns:0 frame:0 TX packets:4 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:204 (204.0 B) TX bytes:204 (204.0 B)

    route -n: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.10.65.1 0.0.0.0 UG 0 0 0 eth0 10.10.65.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0

    /etc/resolv.conf: # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 8.8.8.8 nameserver 8.8.4.4 nameserver 10.81.1.8 nameserver 10.1.2.10 nameserver 127.0.0.1 search yamatake.local

    /etc/network/interfaces: auto lo iface lo inet loopback #auto eth0 #iface eth0 inet dhcp #auto eth1 #iface eth1 inet dhcp

    And I'll also include the result of 'sudo lshw -C network' in case it might help: *-network description: Ethernet interface product: NetXtreme BCM5764M Gigabit Ethernet PCIe vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 10 serial: 78:ac:c0:3d:b2:b9 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 ip=10.10.65.185 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:93 memory:fc000000-fc00ffff *-network description: Ethernet interface product: NetXtreme BCM5764M Gigabit Ethernet PCIe vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:01:00.0 logical name: eth1 version: 10 serial: 78:ac:c0:3d:b2:b8 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 latency=0 link=no multicast=yes port=twisted pair speed=100Mbit/s resources: irq:94 memory:fb000000-fb00ffff

    Read the article

  • Trouble with Collada bones

    - by KyleT
    I have a Collada file with a rigged mesh. I've read the node tags in the library_visual_scenes tag, extracted the matrix for each node, and stored everything in a hierarchical bone structure. My matrix container is "row major", so I'd store the first float of a matrix tag in the 1st row, 1st column, the second in the 1st row, 2nd column, etc. From what I gather, this is the Bind Pose Matrix. After that I went through the tag and extracted the float array in the source tag of the skin tag of the controller for the mesh. I stored each matrix from this float array in its corresponding bone as the Inverse Bind Matrix. I also extracted the bind-shape-matrix and stored it. Now I'd like to draw the skeleton with OpenGL to see if everything is working correctly before I go about skinning. I iterate once over my bones and multiply a bone's Bind Pose Matrix by its parent's and store that. After that I iterate again over the bones and multiply the result of the previous multiplication by the Inverse Bind Matrix and then by the Bind Shape Matrix. The results look something like this: [0.2, 9.2, 5.8, 1.2 ] [4.6, -3.3, -0.2, -0.1 ] [-1.8, 0.2, -4.2, -3.9 ] [0, 0, 0, 1 ] I've had to go to various sources to get the little understanding of Collada I have, and books about 3D transform matrices can get pretty intense. I've hit a brick wall, and if you could please read through this and see if there is something I'm doing wrong, and how I'd go about getting an X,Y,Z to draw a point for each of these joints once I've calculated the final transform, I'd really appreciate it.
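
    For just drawing the skeleton, only the node (bind pose) hierarchy should be needed; the inverse bind and bind shape matrices belong to skinning rather than to the joint positions themselves. A rough sketch of the idea in C# with System.Numerics (a row-major, row-vector convention; note that Collada writes its matrices in row-major order in the document but treats them as column-vector matrices mathematically, so they usually need a transpose before use with a row-vector library). The Bone type here is made up for illustration.

        using System.Numerics;

        class Bone
        {
            public Bone Parent;
            public Matrix4x4 BindPoseLocal;   // the node's matrix, relative to its parent
        }

        static class Skeleton
        {
            // local-to-world: apply the bone's own transform, then walk up the parents
            public static Matrix4x4 WorldMatrix(Bone bone)
            {
                return bone.Parent == null
                    ? bone.BindPoseLocal
                    : bone.BindPoseLocal * WorldMatrix(bone.Parent);
            }

            // the joint position to draw is just the translation part of the world matrix
            public static Vector3 JointPosition(Bone bone)
            {
                return WorldMatrix(bone).Translation;
            }
        }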

    Read the article

  • How to secure Apache for shared hosting environment? (chrooting, avoid symlinking...)

    - by Alessio Periloso
    I'm having problems with my Apache configuration: I want to limit each user to his own docroot (so a chroot() would be what I'm looking for), but: Mod_chroot works only globally and not per virtual host: I have the users in a path like /home/vhosts/xxxxx/domains/domain.tld/public_html (xxxxx is the user), and I can't solve the problem by chrooting /home/vhosts, because the users would still be allowed to see each other. Using apache-mod-itk would slow down the websites too much, and I'm not sure it would solve anything. Without using either of the previous two, I think the only thing left is preventing symlinking, i.e. not allowing the users to link to something that doesn't belong to them. So I think I'm going to follow the third option, but... how can I efficiently prevent symlinking while still keeping mod_rewrite working? PHP has already been chrooted with php-fpm, so my only concern is Apache itself.

    Read the article

  • How to Architect a system on AWS for scaling (with a MySQL back-end)

    - by Edan Maor
    I'm trying to understand how to architect an Amazon Web Services application. As I understand it, the whole point of using something like AWS is to make eventual scaling easier, so I'm trying to understand how to do that. I have an instance running off of EBS (an EBS-based instance, not a regular instance). My application (a Django app) uses MySQL as a back-end. So the question is, where am I supposed to install MySQL? Do I install it on the same instance? In that case, as far as I can tell, I can't simply create more server instances from that image. Or am I supposed to simply spin up another server as a DB server, and run off of that? Thanks for any help!

    Read the article

  • 11.10 install hangs at different places

    - by TreefrogInc
    I've been trying to install Oneiric for some time now, and I've looked everywhere for a solution to the problems I've been having. So far, I've attempted to install it four times, so now I'm at the point of panic. I grabbed the 11.10 x64 ISO from the website and, after verifying that the MD5 hash is correct, burned it onto my last remaining clean CD. On my first attempt, everything went perfectly up to the middle of the installation, where the progress bar stopped when it said "configuring target system." I could do everything else; only the installation seemed to have stopped. After I googled my problem, I used the "check disc for errors" option, which said everything was fine. Then I tried the installation again, only this time I selected "Install Ubuntu now" instead of "try before installing". Again, the same problem. My second and third tries didn't even reach the installation phase; they just stopped at the 5 blinking dots and never went any further. I used the same non-rewritable CD for all the attempts, as the error check didn't show any problems and because I'm currently out of usable CDs. System: Core i3 CPU @ 3.4 GHz, 500 GB HDD (250 GB used for Win7, 70 GB used for preexisting system partitions, 180 GB unallocated).

    Read the article

  • Ubuntu doesn't start and I can't login [migrated]

    - by Meph00
    My Ubuntu 13.04 doesn't boot anymore - eternal black screen. If I press ALT+CTRL+F1, I see that it's stuck on "Checking battery state [OK]." I'd like to try sudo apt-get install gdm, but I can't log in on the terminals tty2, tty3, etc. They correctly ask for my username, then make me wait a long time, ask for my password, and make me wait again. After a lot of time (... a lot), the best I could achieve was seeing "Documentation https://help.ubuntu.com". I can never reach the point where I can give commands. Plus, during the long pauses, every 2 minutes it gives a message like this: INFO: task XXX blocked for more than 120 seconds. Any suggestions? Sorry for my bad English, and thanks to everyone for the attention.

    Read the article
