Search Results

Search found 19615 results on 785 pages for 'apache config'.

Page 567/785 | < Previous Page | 563 564 565 566 567 568 569 570 571 572 573 574  | Next Page >

  • Is it possible to have multiple subdomains point to the same Blogger blog?

    - by cclark
    For our application we want a status page hosted outside the rest of our infrastructure, so that if there are issues in our data center we can post updates our users can still reach. We registered a blog on Blogger and set it up at xyzstatus.blogspot.com and status.xyz.com. Everything seems to work fine. We need to perform some maintenance at our datacenter which will sever all connectivity, so we're unable to redirect using nginx or apache. We'd like to do this with a short-TTL CNAME DNS entry: ideally www.xyz.com and app.xyz.com could be CNAMEd to status.xyz.com. When I set up the CNAMEs and go to those URLs I get Google's broken-robot 404 page. I figure I must need to let Google know it should associate traffic for www.xyz.com and app.xyz.com with the blog served at status.xyz.com, but I can't see anywhere to do this in Blogger. Does anyone know if this is possible?
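
    For illustration, a zone-file sketch of the setup being attempted (ghs.google.com is the usual CNAME target for Blogger custom domains - treat that as an assumption and verify in Blogger's settings). DNS will happily resolve the chained CNAMEs; the 404 is consistent with the asker's own theory, since Google's front end routes on the Host header (www.xyz.com or app.xyz.com) rather than on what the name ultimately points to:

        ; zone-file sketch of the records described
        status.xyz.com.   IN  CNAME  ghs.google.com.    ; Blogger's usual target (assumption)
        www.xyz.com.      IN  CNAME  status.xyz.com.
        app.xyz.com.      IN  CNAME  status.xyz.com.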

    Read the article

  • Limiting calls to WCF services from BizTalk

    - by IntegrationOverload
    ** WORK IN PROGRESS ** This is just a placeholder for the full article that is in progress. The problem: my BTS solution was receiving thousands of messages at once. After processing by BTS I needed to send them on via one of several WCF services, depending on the message content. The problem is that due to the asynchronous nature of BizTalk the WCF services were getting hammered and could not cope with the load. Note: it is possible to limit the SOAP calls in the BtsNtSvc.exe.Config file, but that does not have the desired results for Net-TCP WCF services. The solution: I created a new MessageType for the messages in question and posted them to the BTS message box. This schema included the URL they were being sent to as a promoted property. I then subscribed to the message type from a new orchestration (that does just the WCF send) using the URL as a correlation ID. This created a singleton orchestration that was instantiated when the first message hit the message box. It then waits for further messages with the same correlation ID and type and processes them one at a time using a loop shape with a timer (a pretty standard pattern for processing related messages). Image to go here. This limits the number of calls to each individual WCF service to 1, which is a good start, but the services can handle more than that and I didn't want to create a bottleneck. So I then constructed the correlation ID from the URL concatenated with a random number between 1 and 10. This gives 10 possible correlation IDs per URL, and so 10 instances of the singleton orchestration per WCF service. Just what I needed, and the upper random number is a configuration value in SSO so I can change the maximum connections without touching the code.
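
    For comparison, WCF itself exposes a service-side throttle. Whether it fits depends on how the services are hosted, and unlike the orchestration pattern above it caps calls at the receiver rather than pacing the sender - a sketch of the standard serviceThrottling behavior in the service's config (behavior name is hypothetical):

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="CappedThrottle">
                <serviceThrottling maxConcurrentCalls="10"
                                   maxConcurrentSessions="10"
                                   maxConcurrentInstances="10" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>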

    Read the article

  • HP Envy dv6t-7300: Disabled WiFi through button and can't enable it anymore

    - by Mateus B. Cassiano
    Well, I have an HP Envy dv6t-7300 laptop that came with a Ralink RT5390 WiFi card. Everything was working perfectly, and occasionally I press the WiFi button on my keyboard to toggle the card on/off. Until today, all worked right: if the wifi was off (wifi LED amber) and I pressed the wifi button, after a few seconds the LED turned white and everything worked. If I repeated the process, the wifi LED turned amber and the card got disabled. But now I can't turn it on anymore. Running sudo rfkill list all I get:

        0: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        1: hp-wifi: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

    So I ran sudo rfkill unblock all, but nothing changed. As a side note, if I run sudo ifconfig wlan0 up, the indicator LED turns white (indicating that the card was enabled), but Ubuntu still says that the card is blocked by hardware. Extra information: the card works without issues in Windows and in the Ubuntu installer (booting from a live CD). I'm using the card out of the box, with the drivers already included in Ubuntu 12.10. The module rt2800pci is loaded and working fine, not blacklisted, etc. The card and the button toggle worked flawlessly until today, when I toggled it off and couldn't turn it on anymore... The problem is back, but in a different manner: if I don't press the wifi key a few times during the GRUB loading, at the login screen the wifi button will be amber (disabled); pressing it will toggle it white (enabled) or amber (disabled) again, but Ubuntu still says that the network card was disabled by hardware and doesn't connect. In other words, if I don't press the WiFi button a few times while Ubuntu is booting, it will be stuck with the "network card was disabled by hardware" message, even if the light is white (enabled). Any clue? Maybe an error in some startup script or config file?
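
    On HP laptops the hp-wifi rfkill switch is exposed by the hp_wmi kernel module, so one diagnostic sketch worth trying (it clears the hard block on some models - an assumption here) is to reload that module and re-check the block state:

        sudo modprobe -r hp_wmi      # remove the HP platform module
        sudo rfkill unblock all
        sudo modprobe hp_wmi
        rfkill list all              # check whether "Hard blocked" has cleared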

    Read the article

  • Ubuntu 12.04 desktop, VNC viewer not refreshing screen

    - by user73279
    I've had these issues across multiple machines and multiple versions of Ubuntu desktop (all 10.04 or later). Usually it happens with an old laptop I've put Ubuntu on, but now it's happening on my primary dev machine (a quad-core PC recently upgraded to Ubuntu 12.04 desktop). The problem is this - I can connect to the machine and log in with the password, and the initial screen looks fine but never refreshes. I can see the monitor for the machine across the room and can see the mouse move and the menus pop up, but the image of the screen on the PC in front of me running the VNC viewer never updates. So the mouse and keyboard commands are working.

      - Ubuntu 12.04 Desktop
      - UltraVNC viewer (also seen with RealVNC's free VNC viewer)
      - Desktop Sharing
      - Static IP on eth0; dynamic IP on eth1

    I think it is an Ubuntu config issue because this PC used to work just fine with 9.04, 10.04, and 11.10 (over the past couple of years). I've also had a couple of laptops that used to have this issue with older Ubuntus but don't with 12.04. Additional info: the Win7 PC I'm trying to use to control the Ubuntu PC is connected via 2 D-Link 8-port gigabit routers. The Ubuntu laptop I usually control via VNC is typically only connected to the network via wireless; its screen refresh is choppy but usable. I've repeated the issue from a Win7 laptop connected via both ethernet and wireless.
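
    One way to isolate whether the stock desktop-sharing server (Vino) is at fault is to try a different VNC server against the same session; x11vnc's -noxdamage flag in particular is aimed at stale-screen artifacts under compositing window managers. A sketch, not a confirmed fix:

        sudo apt-get install x11vnc
        x11vnc -display :0 -noxdamage    # serve the current desktop; -noxdamage
                                         # works around missed screen updates
                                         # with compositing WMs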

    Read the article

  • What kind of hosting do I need?

    - by Robert Smith
    I migrated this question from Server Fault; hopefully this is the appropriate place. I have been trying to answer this question but I haven't found a specific answer to my situation. As I want to pay for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find in plugins, e.g. WP-Forum or phpBB-type software) in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images and I certainly don't need a huge amount of bandwidth (200GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256MB be enough (that's the amount of RAM offered by small instances in Amazon and Rackspace)? Do you know of any alternatives to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. By the way, I was told that Linode is not all that different from Amazon EC2, but this website is supposed to work 24/7, so I can't take advantage of Linode's flexibility regarding creating and deleting instances. Thanks in advance.
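
    For what it's worth, a minimal sketch of the nginx-in-front-of-gunicorn arrangement being considered (server name, port, and project name are placeholders):

        # /etc/nginx/sites-available/forum -- minimal reverse-proxy sketch
        server {
            listen 80;
            server_name forum.example.com;

            location / {
                proxy_pass       http://127.0.0.1:8000;   # gunicorn bound to localhost
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    with gunicorn started along the lines of: gunicorn --workers 2 --bind 127.0.0.1:8000 myproject.wsgi:application. Two or three workers on a 256MB instance is a plausible starting point for this traffic level, but it needs measuring against the real app.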

    Read the article

  • Trying to install Proprietary Nvidia Graphics Drivers

    - by Peter Snow
    After reading and trying many different suggestions for some hours, I returned to this how-to: https://help.ubuntu.com/community/BinaryDriverHowto/Nvidia. The first problem I encounter is how to identify which of the listed drivers support my Nvidia GeForce 630M graphics card. Following the links doesn't really help, since it is not stated there either (except where support for a new driver was added later, which is explicitly stated; the original devices covered are not). However, even if I knew, if it doesn't appear in the 'Additional Drivers' dialogue (see below), how would I install it? Second issue: the article goes on to say that available drivers for my hardware are usually listed in 'Additional Drivers'. In my case, they aren't. Unfortunately, it doesn't tell me how to correct that or work around it. I've checked the BIOS and there is no way offered there to disable the integrated graphics, only the Nvidia graphics. I've also tried each available option in this:

        $ sudo update-alternatives --config i386-linux-gnu_gl_conf

    My system is an Acer Aspire 4752G bought May 2012. I'm running Ubuntu 12.04 LTS. uname -a gives:

        3.2.0-38-generic-pae #61-Ubuntu SMP Tue Feb 19 12:39:51 UTC 2013 i686 i686 i386 GNU/Linux

    It's 64-bit hardware but I installed a 32-bit OS for greater software compatibility. Running

        $ sudo tail -fn 500 /var/log/Xorg.0.log | grep '(EE)'

    returns:

        (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        [    28.886] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)

    The reason for wanting the proprietary drivers is that my laptop comes with a 3D-accelerated graphics adapter, so rather than confining myself to struggling with the on-board graphics, I would rather use it. I also want to experiment with using it for bitcoin mining (which uses the GPUs for computing power).
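
    A note on the hardware: the GeForce 630M ships in Optimus (hybrid Intel+Nvidia) laptops, which would also explain why the BIOS only offers to disable the Nvidia chip and why the card never shows in 'Additional Drivers'. On 12.04 the commonly used route for Optimus machines was Bumblebee rather than the plain nvidia-current driver - a sketch, assuming this is Optimus hardware:

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        optirun glxgears    # runs a test program on the discrete GPU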

    Read the article

  • One monitor getting spilled over into the other monitor: how to do a 100% reset of gnome graphics configuration

    - by Paul Nathan
    I had to kill a VMware process, and afterwards my monitor configuration is buggy. I have 2 monitors in a side-by-side configuration; the right-hand monitor is the secondary monitor. On its right-hand side there are about 50 pixels showing from the left side of the left-hand monitor (i.e., as if it were wrapped around). Further, my mouse clicks are registering about 50 pixels sideways from where they should be. It's as if those 50 pixels between monitors got gobbled. What have I done? I've reset the screen configuration in multiple ways, using xrandr, the multiple monitors app, etc. This persists in different side-by-side configurations, and also persists with another user. It does not occur with XFCE. Resetting the window manager with the Compiz reset WM app does not fix this. I've concluded the burn-to-the-ground approach is likely the best, and would like to do a 100% reset of my graphics settings. It's an Intel integrated chipset. Removing ~/.config/monitors.xml did not work. Also, interestingly, the mouse can mouse-over the 50 errant pixels on the right-hand side of the right-hand monitor; I hypothesize that it's a compositing problem occurring at the layer where the background, selection, and clicks are caught. Also, inverting the right-hand monitor removes the issue, but renders the screen unusable. Even more datapoints: this happens in KDE as well. Sometimes logging into GNOME and running xrandr --output DVI1 --auto resets it, but the issue immediately reappears when I press Alt-Tab. With the Compiz Application Switch plugin turned on, the workspace is 'pushed back' a bit, and the slice on the right-hand side follows it as well. I'm wondering if it's a flaw in the Compiz workspace compositing configuration. I suspect the error is in the compositing configuration. I installed 11.10.
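
    For a scorched-earth reset of the 11.10-era desktop settings, a sketch (on 11.10 Compiz still stored its settings in GConf; log out and back in afterwards):

        gconftool-2 --recursive-unset /apps/compiz-1   # wipe all Compiz/Unity plugin settings
        unity --reset                                  # relaunch Unity with defaults (11.10/12.04 era)
        rm -f ~/.config/monitors.xml                   # already tried above; listed for completeness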

    Read the article

  • "The protocol 'net.msmq' is not supported."

    - by Randolpho St. John
    OMG, a new lesson! Will wonders never cease? So I ran into an interesting issue setting up a WCF service to consume an MSMQ queue. I won't bother you with the details of how to actually build a WCF/MSMQ service; there are plenty of tutorials on the subject. I want to share with you an interesting error that I ran into and the surprisingly simple fix. The error occurs when attempting to generate a Service Reference, or even simply browsing to the WSDL of your WCF/MSMQ service, in the form of a YSOD with the following error: "The protocol 'net.msmq' is not supported." A lot of Googling on the subject turned up plenty of questions with the same error but no answers, so I went digging into some application-level config files on a server that already had a WCF/MSMQ service successfully set up by the network admin, and the answer was amazingly simple: if you are hosting an MSMQ/WCF service in IIS, you have to tell IIS to allow the net.msmq protocol. It's in the advanced settings for the application or site in which you are hosting the service. .... aaaand, that's it.
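
    For reference, the same advanced setting can be flipped from the command line with appcmd (the site and application names below are hypothetical):

        %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/MyMsmqService" /enabledProtocols:http,net.msmq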

    Read the article

  • How to schedule time-of-day upgrades

    - by Richard
    Hello, I'm responsible for about 30 Ubuntu computers at a private K-8 school. We have only a 3 Mbps internet connection serving the entire campus, and I would like to ensure that updates are done in the middle of the night, so that daytime tasks are not slowed down. I'm using Ubuntu 10.04, and have set all computers to download and install security updates via the update manager. I have also installed cron-apt and modified the config file to stagger the start times of the upgrades from about 10pm to 4am local time. HOWEVER - this morning I arrived at the school at 7:30am and all the computers were busy downloading a large security update. Needless to say, all internet activity was slowed to a crawl for the next 2 hours, and the computer users were very, very upset. This was exactly the event I'm trying to prevent. It seems that my scheme to ensure middle-of-the-night downloads failed, and I'm not sure why. I've also tried some schemes using unattended-upgrades and crontab, but there always seemed to be something scheduling upgrades in addition to the ones I try to force in the middle of the night. Is there a surefire way to absolutely, positively guarantee that updates will occur only at one specific time? It would be nice if the update manager just had a drop-down menu to specify a designated time. Thanks in advance for any help you can give me.
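
    For illustration, one way to pin everything to a single window on 10.04 is to zero out apt's own periodic jobs (the likely source of the competing daytime run) and drive the upgrade yourself from cron. The times and file name here are assumptions, and cron-apt would need its schedule aligned or be removed so the two don't overlap:

        # /etc/apt/apt.conf.d/10periodic -- stop the automatic daily run
        APT::Periodic::Update-Package-Lists "0";
        APT::Periodic::Download-Upgradeable-Packages "0";
        APT::Periodic::Unattended-Upgrade "0";

        # /etc/cron.d/nightly-upgrade -- run updates at 02:30 instead
        30 2 * * * root apt-get update -qq && apt-get upgrade -y -qq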

    Read the article

  • Weird 302 Redirects in Windows Azure

    - by Your DisplayName here!
    In IdentityServer I don't use Forms Authentication but the session facility from WIF. That also means that I implemented my own redirect logic to a login page when needed. To achieve that I turned off the built-in authentication (authenticationMode="none") and added an Application_EndRequest handler that checks for 401s and does the redirect to my sign-in route. The redirect only happens for web pages and not for web services. This all works fine in local IIS - but in the Azure Compute Emulator and Windows Azure many of my tests are failing, and I suddenly see 302 status codes where I expected 401s (the web service calls). After some debugging kung-fu and enabling FREB I found out that there is still the Forms Authentication module in effect, turning 401s into 302s. My EndRequest handler never sees a 401 (despite turning forms auth off in config)! Not sure what's going on (I suspect some inherited configuration that gets in my way here). Even if it shouldn't be necessary, an explicit removal of the forms auth module from the module list fixed it, and I now have the same behavior in local IIS and Windows Azure. Strange.

        <modules>
          <remove name="FormsAuthentication" />
        </modules>

    HTH. Update: Brock ran into the same issue, and found the real reason. Read here.

    Read the article

  • How To Export/Import a Website in IIS 7.x

    - by Tray Harrison
    IIS 6 had a great feature called 'Save Configuration to a File' which would allow you to easily export a website's configuration, to be later imported either on the same server or another box. This came in handy anytime you wanted to duplicate a site in order to do some testing without impacting the existing application. So naturally, Microsoft decided to do away with this feature in IIS 7. The process to export/import a site is still fairly simple, though not as obvious as it was in previous versions. Here are the steps:

    1. Open a command prompt, navigate to C:\Windows\System32\inetsrv, and run the following command:

        appcmd list site /name:<sitename> /config /xml > C:\output.xml

    So if you were wanting to export a website named EAC, you would run:

        appcmd list site /name:EAC /config /xml > C:\output.xml

    2. If you'll be setting up another copy of the site on the same server, you'll now need to edit the output.xml file before importing it. This is necessary in order to avoid conflicts such as bindings, Site ID, etc. To do this, edit the XML and change those values. Go ahead and make a copy of the home directory, and rename it to whatever folder name you specified in the output - /EAC2 in this example. If you decide to change the app pool, make sure you go ahead and create the new app pool as well.

    3. Once these edits have been made, we are ready to import the site. To do that, run:

        appcmd add sites /in < C:\output.xml

    That's it. You should now see your site listed when opening up Inet Manager. If for some reason the site fails to start, that's probably because you forgot to create the new app pool or there is a problem with one of the other parameters you changed. Look at the System log to identify any issues like this.

    Read the article

  • Enterprise Instrumentation: The 'sessionName' parameter of value 'TraceSession' is not valid

    - by Michael Freidgeim
    We are still using Enterprise Instrumentation (created back in the .NET 1.1 days). In a new Server 2008 environment with IIS 7, we get the following errors:

        The 'sessionName' parameter of value 'TraceSession' is not valid. A trace session of this name does not exist in the TraceSessions configuration file for Windows Trace Session Manager service. Ensure that a session of this name exists in the TraceSessions configuration file and that the Windows Trace Session Manager service is started.
           at Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink..ctor(IDictionary parameters, EventSource eventSource)
           --- End of inner exception stack trace ---
           at System.RuntimeMethodHandle._InvokeConstructor(IRuntimeMethodInfo method, Object[] args, SignatureStruct& signature, RuntimeType declaringType)
           at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
           at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)
           at Microsoft.EnterpriseInstrumentation.EventSinks.EventSink.CreateNewEventSinks(DataRow[] eventSinkRows, EventSource eventSource)

    I've seen the same errors on development Win7 machines when using IIS; it doesn't seem to be a problem on Cassini. I've checked that the Windows Trace Session Manager service has started and that the file C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\TraceSessions.config has the corresponding entry:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
            <defaultParameters minBuffers="4" maxFileSize="10" maxBuffers="25" bufferSize="20" logFileMode="sequential" flushTimer="3" />
            <sessionList>
                <session name="TraceSession" enabled="false" fileName="C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\Logs\TraceLog.log" />
            </sessionList>
        </configuration>

    The errors still continued, but I was able to disable the parameter in the eventSink configuration:

        <eventSink name="traceSink" description="Outputs events to the Windows Event Trace." type="Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink">
            <!-- MNF disabled parameter to avoid error "The 'sessionName' parameter of value 'TraceSession' is not valid"
                 <parameter name="sessionName" value="TraceSession" />
            -->
        </eventSink>

    Related old post: http://bytes.com/topic/net/answers/104761-enterprise-instrumentation-windows-trace-session-manager. One day I wish to replace all EnterpriseInstrumentation calls with NLog.

    Read the article

  • PHP-based indexing and search implementation

    - by Chris
    Is there such a thing? A while back I designed a rudimentary form-based app for my users. We receive hardware manufacturing data from our suppliers in XML files: the file name is made up of eleven fields separated by tildes, with each field having its own meaning. The R&D guys wanted to be able to search each field of the file names, so I used regular expressions, with decent results. The problem is that we now have upwards of 2.5 million files, and my app can't hack it anymore. I looked at Apache Lucene & Solr. Though it seemed like the best solution to my problem, the fields in the filenames are not peers to the file content - a big no-no with Solr. What is the best way to implement a PHP app with indexing and search capability over such a large number of files? Do I have to buy Zend and use Zend_Search? Is that the only way? Thanks for your input.
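
    Since the eleven fields are structured metadata, one common alternative is to parse them once into a database with an index per column and let SQL do the searching instead of scanning filenames with regexes. A minimal PHP sketch, where the table name, column names, credentials, and /data/xml path are all hypothetical:

        <?php
        // Parse the eleven tilde-separated fields out of each file name
        // and store them in an indexed table (one column per field).
        $pdo = new PDO('mysql:host=localhost;dbname=parts', 'user', 'pass');
        $stmt = $pdo->prepare(
            'INSERT INTO files (f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11, filename)
             VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)');

        foreach (new DirectoryIterator('/data/xml') as $file) {
            if ($file->isDot() || $file->getExtension() !== 'xml') {
                continue;
            }
            $fields = explode('~', $file->getBasename('.xml'));
            if (count($fields) !== 11) {
                continue;   // skip names that don't match the convention
            }
            $stmt->execute(array_merge($fields, array($file->getFilename())));
        }
        // A search on any field is then an indexed WHERE clause:
        //   SELECT filename FROM files WHERE f3 = ? AND f7 LIKE ?

    At 2.5 million rows this is well within what MySQL (or even SQLite) handles comfortably, and it avoids the Lucene/Solr mismatch since the file content never needs to be indexed.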

    Read the article

  • Only one user can connect to Ubuntu samba server

    - by StaticMethod
    I set up a Samba server on 12.04 LTS, and it works great for one user but not the others. I am trying to map a network drive from a Windows 7 laptop. I can successfully authenticate with one user, but the other two both get "Access is denied" errors. Here is my smb.conf file:

        [global]
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        idmap config * : backend = tdb

        [printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        print ok = Yes
        browseable = No

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers

        [share]
        comment = Ubuntu File Server Share
        path = /srv/share
        read only = No
        create mask = 0755

    I know that the service is successfully reading from the /etc/passwd file, because if I change the Linux password for the user that works, I have to use the new password when I connect. I changed all the users so they are all members of the same groups (all three users are admins anyway). I only ever have one user connected at a time. Here are the permissions on the shared folder:

        /srv$ ls -l
        drwxrwxrwx 1 nobody nogroup 16 Feb 22 17:05 share

    Any ideas?
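
    One thing worth checking: even with unix password sync enabled, Samba authenticates against its own password database, and that sync only keeps the two in step for users that already have a Samba entry. A diagnostic sketch (username hypothetical):

        sudo pdbedit -L           # list the users Samba actually knows about
        sudo smbpasswd -a bob     # add an existing Unix user to Samba's database
        testparm                  # syntax-check smb.conf while you're at it

    If only the working user shows up in pdbedit -L, that would explain the "Access is denied" for the other two.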

    Read the article

  • VCS strategy with TeamCity and CI

    - by Luke Puplett
    I'm planning a strategy which seeks to allow automated deployment of a website codebase into QA and production on check-in. We're using the fabulous TeamCity. We want to control release to live production; i.e. not have every check-in on Trunk go live. So my plan is to use Trunk as QA. Committing to Trunk triggers deployment to QA. I will then have a Production branch which also triggers deployment on commit, to the live site. The idea is simply that Trunk represents the mainline codebase but it hasn't gone live yet. We can branch features and do daily pulls from Trunk into those feature branches as per normal and merge/re-integrate into Trunk when we're happy for it to go to QA. When the BAs give the nod, we then smash a bottle of champagne and merge Trunk to Production and out she goes. I've never seen it done like this. Other greenfield CI strategies involve hiding features and code from production via config - this codebase can't cope with that - or just having CI on QA and taking cuts and manually pushing to live. Does my plan sound alright?

    Read the article

  • Sound card not detected in 13.04

    - by Ganessh Kumar R P
    I have a problem with my sound card: there is no volume up/down option anywhere, and in Settings -> Sound no card is detected. But when I run sudo aplay -l, I get the following output:

        **** List of PLAYBACK Hardware Devices ****
        Failed to create secure directory (/home/ganessh/.config/pulse): Permission denied
        card 0: MID [HDA Intel MID], device 0: STAC92xx Analog [STAC92xx Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    And the command lspci -v | grep -A7 -i "audio" outputs:

        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
            Subsystem: Dell Device 02a2
            Flags: bus master, fast devsel, latency 0, IRQ 48
            Memory at f0f20000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: snd_hda_intel
        00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06) (prog-if 00 [Normal decode])
        --
        02:00.1 Audio device: NVIDIA Corporation GF106 High Definition Audio Controller (rev a1)
            Subsystem: Dell Device 02a2
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at d3efc000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: snd_hda_intel
        07:00.0 Network controller: Intel Corporation Ultimate N WiFi Link 5300

    So I assume that the drivers are properly installed, but I still don't get any option in the settings or volume control. The same card used to work well back in the 2010 releases (10.04 and 10.10). Any help is appreciated. Thanks
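
    The "Permission denied" line in the aplay output is a clue worth chasing: if ~/.config/pulse (or ~/.config itself) has ended up owned by root - typically after running a GUI app with sudo - PulseAudio cannot start for the user, and the Sound panel then shows no card even though ALSA sees the hardware. A sketch of the check and fix, assuming that is the cause:

        ls -ld ~/.config ~/.config/pulse     # look for root-owned entries
        sudo chown -R ganessh:ganessh /home/ganessh/.config
        pulseaudio --start                   # or simply log out and back in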

    Read the article

  • How do I clip an image in OpenGL ES on Android?

    - by Maxim Shoustin
    My game involves "wiping off" an image by touch: after moving a finger over it, the overlay becomes transparent along the finger's path. At the moment, I'm implementing it with Canvas, like this:

        Paint pTouch;
        int X = 100;
        int Y = 100;
        Bitmap overlay;
        Canvas c2;
        Rect dest;

        pTouch = new Paint(Paint.ANTI_ALIAS_FLAG);
        pTouch.setXfermode(new PorterDuffXfermode(Mode.SRC_OUT));
        pTouch.setColor(Color.TRANSPARENT);
        pTouch.setMaskFilter(new BlurMaskFilter(15, Blur.NORMAL));
        overlay = BitmapFactory.decodeResource(getResources(), R.drawable.wraith_spell).copy(Config.ARGB_8888, true);
        c2 = new Canvas(overlay);
        dest = new Rect(0, 0, getWidth(), getHeight());
        Paint paint = new Paint();
        paint.setFilterBitmap(true);
        ...
        @Override
        protected void onDraw(Canvas canvas) {
            ...
            c2.drawCircle(X, Y, 80, pTouch);
            canvas.drawBitmap(overlay, 0, 0, null);
            ...
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN:
                case MotionEvent.ACTION_MOVE: {
                    X = (int) event.getX();
                    Y = (int) event.getY();
                    invalidate();
                    c2.drawCircle(X, Y, 80, pTouch);
                    break;
                }
            }
            return true;
        ...

    What I'm essentially doing is drawing transparency onto the canvas, over the red ball image. Canvas and Bitmap feel old... surely there is a way to do something similar with OpenGL ES. What is it called? How do I use it? [EDIT] I found that if I draw an image and then draw a new image above it with alpha 0, the area becomes transparent - maybe that's the direction?

    Read the article

  • Dual booted Windows 7 freezes after login screen

    - by Cathal
    First-time Linux user, using a Packard Bell EasyNote TS laptop. My problem arose after I dual-boot installed Ubuntu 12.04 alongside Windows 7 via Wubi. I backed up all my data and reinstalled Windows from factory settings on the recovery partition. When I first tried to install Ubuntu, I mistakenly closed the lid at the start of the installation, stopping it. After that I rebooted, and my second installation attempt went without a hitch. Ubuntu works perfectly, and the data on the partitions seems to be fine. My problem is I can't log back in to Windows 7. After selecting it in GRUB, and then in the Windows 7/Wubi choice on boot, it loads up perfectly until the user login screen. After the password is entered, it stalls on the "Welcome" busy screen. This happens in Safe Mode as well. Startup Repair can't find a problem, and neither can CHKDSK. System Restore and Last Known Good Configuration have no effect either. If anyone could help me out, I'd be real grateful. Edit, in response to the question below, since I don't know how to comment: Windows was installed first and its partitions are the first on the list. Should I move the Windows partitions to after the Linux ones on the disk? Thanks for your help.

    Read the article

  • Custom domain pointing to a Tumblr blog

    - by Julius
    My domain mydomain.com is registered with GoDaddy. I wish to host my Tumblr blog on this domain with NearlyFreeSpeech.NET hosting. My active nameservers at GoDaddy already point to my authoritative ones at NFS.net, which is working. However, I'm baffled about the correct configuration for pointing to my Tumblr. Preferably I'd like (A) my domain http://mydomain.com to host the blog and have http://www.mydomain.com redirect to http://mydomain.com. If this is too difficult, my next preference is (B) to have http://www.mydomain.com host the blog while http://mydomain.com redirects to http://www.mydomain.com. My third preference is (C) a sub-domain like http://tumblr.mydomain.com to host the blog, and I guess have http://mydomain.com and http://www.mydomain.com both redirect to it. I've tried having two aliases, mydomain.com and www.mydomain.com, pointing to my permanent NFS IP at mydomain.nfshost.com, and when I try to add: (1) an A record pointing mydomain.com to the IP 66.6.44.4 as per Tumblr's instructions, it tells me I already have the bare domain as an alias, so I can't do that; (2) the A record on the www.mydomain.com alias - I can do this with www.mydomain.com set as an alias or not, but when I tried it with mydomain.com set as the canonical name, the result when visiting either mydomain.com or www.mydomain.com was the two continually redirecting to each other until an error was thrown. So I was wondering if there is a ninja who could save me some hair-pulling and tell me the correct way to configure option A, or else B, or else C.
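
    If preference (B) is acceptable, the records involved would look roughly like this (domains.tumblr.com was Tumblr's documented CNAME target at the time - verify before use; the A record IP is the one from Tumblr's instructions above). The apex name can't carry a CNAME alongside other records, which is consistent with the alias error in (1), hence the A record at the bare domain:

        ; zone-file sketch for preference (B)
        www.mydomain.com.   IN  CNAME  domains.tumblr.com.
        mydomain.com.       IN  A      66.6.44.4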

    Read the article

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting, it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure if that is a wise approach, due to its deny-all, allow-only-a-few nature. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use items such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but the idea would be the same, I'm assuming. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit. I've tried to take accessibility into account, because some web browsers are more geared toward those with disabilities, although I'm not aware of any specific ones at the moment. The need to not depend on whitelisting alone to keep the site away from harm is fully understood. Other means to protect the site still need to be in place. I intend to have a honeypot, a checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.
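
    For illustration only, a user-agent allowlist sketch in nginx terms (the asker isn't on Apache; the server choice and the patterns here are assumptions, and User-Agent strings are trivially spoofable, so this can only ever be one layer among the others listed):

        # map the UA string to an allow flag; default-deny
        map $http_user_agent $ua_allowed {
            default                                0;
            ~*(firefox|chrome|safari|opera|msie)   1;   # mainstream browsers
            ~*googlebot                            1;   # crawlers you want to keep
        }

        server {
            listen 80;
            if ($ua_allowed = 0) { return 403; }
            # ... rest of the site config ...
        }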

    Read the article

  • Is Akka a good solution for a concurrent pipeline/workflow problem?

    - by herpylderp
    Disclaimer: I am brand new to Akka and the concept of Actors/Event-Driven Architectures in general. I have to implement a fairly complex problem where users can configure a "concurrent pipeline":

      - Pipeline: consists of 1+ Stages; all Stages execute sequentially
      - Stage: consists of 1+ Tasks; all Tasks execute in parallel
      - Task: essentially a Java Runnable

    As you can see above, a Task is a Runnable that does some unit of work. Tasks are organized into Stages, which execute their Tasks in parallel. Stages are organized into the Pipeline, which executes its Stages sequentially. Hence if a user specifies the following Pipeline:

      CrossTheRoadSafelyPipeline
        Stage 1: Look left
          Task 1: Turn your head to the left and look for cars
          Task 2: Listen for cars
        Stage 2: Look right
          Task 1: Turn your head to the right and look for cars
          Task 2: Listen for cars

    then Stage 1 will execute, and then Stage 2 will execute. However, while each Stage is executing, its individual Tasks are executing in parallel/at the same time. In reality Pipelines will become very complicated, with hundreds of Stages and dozens of Tasks per Stage (again, executing at the same time). To implement this Pipeline I can only think of several solutions: ESB/Apache Camel, Guava Event Bus, Java 5 Concurrency, or Actors/Akka. Camel doesn't seem right because its core competency is integration, not synchrony and orchestration across worker threads. Guava is great, but this doesn't really feel like a subscriber/publisher type of problem. And Java 5 Concurrency (ExecutorService, etc.) just feels too low-level and painful. So I ask: is Akka a strong candidate for this type of problem? If so, how? If not, then why, and what is a good candidate?

    Read the article

  • Facebook Like javascript related to Time Spent Downloading a page Increase in GWT?

    - by donaldthe
    Hi, I installed the Facebook Like button (Javascript version) on my website on December 15th. Take a look at the report from Google Webmaster Central: crawl stats for Googlebot activity in the last 90 days. The crawl stats are from Googlebot, which as far as I know doesn't execute Javascript. Could the Facebook Like Javascript code, "the XFBML version", be related to the large spike in time spent downloading a page? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to go down by half somewhere in December. It may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or Googlebot's crawl problem. Here is the Facebook code I am using to create the Like button. It is right after the opening body tag:

        <div id="fb-root"></div>
        <script>
          window.fbAsyncInit = function() {
            FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true});
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
          }());
        </script>

    And this creates the Like box:

        <fb:like show_faces="false"></fb:like>

    If the Javascript can't be the problem, any ideas on where to start looking would be appreciated.

    Read the article

  • I receive the error 'grub-install /dev/sda failed' while attempting to install Ubuntu as the computer's only OS.

    - by Liath
    I am attempting to install Ubuntu on a box which was previously running Windows 7, and I have run into the dreaded "Unable to install GRUB" error. I am not attempting to dual boot. I have previously run a Windows boot disk and removed all existing partitions. If I run the Ubuntu 12.04 install CD and click install after the config screens, I get the error:

        Executing 'grub-install /dev/sda' failed. This is a fatal error.

    (It is the same error as in this question: Unable to install GRUB.) All the questions I've read while looking for a solution are related to dual boot. I'm not interested in dual boot; I'm after a clean, out-of-the-box Ubuntu install. How can I achieve this? (For my sanity, please use very simple instructions when responding. I don't claim to have any talent either for Linux or as a sysadmin.)

    Additional details copied from comments, dated 2012-05-29 ~15:19Z: After booting from the CD and clicking Try Ubuntu, sudo fdisk /dev/sda gives:

        fdisk: unable to seek on /dev/sda: Invalid argument

    and sudo fdisk /dev/sdb gives:

        Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel.
        Building a new DOS disklabel with disk identifier 0x15228d1d.
        Changes will remain in memory only, until you decide to write them.
        After that, of course, the previous content won't be recoverable.
        Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite).
        Command (m for help):

    I should add that the live CD desktop is graphically bad: I've got missing parts of programs, and the terminal occasionally reflects to the bottom of the screen. But I can't imagine this is related.

    Read the article

  • Unity in 12.10 comes up behind other windows

    - by ams
    I've just upgraded from 12.04 to 12.10. For the most part everything works fine, but I have a few small problems with Unity, or maybe Compiz. When I hit the Super key, or click on the dash launcher, the dash sometimes comes up behind the other windows on the screen. As you can imagine, this makes it somewhat tricky to use. Once it has started coming up behind, no amount of trying again will convince it to come back to the front. Possibly related, the Alt-Tab switcher doesn't show either. It may be that there isn't one, or maybe that's behind also? Alt-Tab does switch the windows, but there's no visual indicator. When I hit Super-W, the windows do all do the zoom thing, but it's slow and juddery where it used to be smooth in 12.04. I'm using the standard "radeon" driver, same as before, with a triple-head monitor setup (and that works fine). I've not tried the proprietary drivers, as I've previously found their multi-monitor support much weaker than the default driver's, but maybe that's the way to go now? Video plays fine. Even WebGL seems OK. Do others see this problem? Is it a bug? Or have I just got some leftover config from 12.04 in the way?

    Read the article

  • How to do a 3-tier architecture using PHP [closed]

    - by Ric
    I have a requirement from a client for my PHP web application to be 3-tier. For example, I would have a web server on Apache in the DMZ, but it should NOT contain any DB connections. It should connect to a middle server that hosts the business objects but sits behind the firewall. Those objects then connect to my SQL cluster on another server. I have actually done this using .NET, but I am not sure how to set up my stack using PHP. I suppose I could have my UI front tier call the middle tier using REST-based web services if I create the middle tier as a second web server, but this seems overly complex. The main reason for this is advanced security: we cannot have any passwords on the DMZ first-tier web server. The second reason is scalability - to have multiple servers on different tiers that can handle the requests. The last reason is deployment - it is easier if I can take one set of servers offline for testing before putting them back in production. Is there an open source project that shows how to do this? The only example I can find is the web server hosting files from a shared drive on another machine (kind of how DotNetNuke pretends to be 3-tier), but that is NOT secure.
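
    If the middle tier is exposed as REST over HTTPS, the front tier's call can be as simple as a curl request. A minimal sketch, with the internal hostname, path, and JSON shape entirely hypothetical - the point being that the DMZ box holds only an internal URL, never a DB credential:

        <?php
        // Front tier: fetch a business object from the middle tier.
        $ch = curl_init('https://middle.internal.example/api/orders/42');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/json'));
        $body = curl_exec($ch);
        if ($body === false) {
            error_log('middle tier unreachable: ' . curl_error($ch));
            $order = null;
        } else {
            $order = json_decode($body, true);   // plain data, no DB access here
        }
        curl_close($ch);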

    Read the article
