Search Results

Search found 31 results on 2 pages for 'leachy peachy'.

Page 1 of 2 | Next Page >

  • How to setup the Mac OS X Terminal so it's *just peachy*?

    - by kch
    Hi all, my Terminal is awesome and has every detail just right (for me, anyway). Now I'm setting up a few new Macs around here and I have no idea how to get their terminals into the same pretty state. My user account is rather old and has been migrated over many OS X releases and machines, so my Terminal setup has grown organically over the years. What I need is a recipe to start from scratch, so 1) I know what I've done, and 2) I can reproduce it anywhere. Things I'm looking for:
      - Full UTF-8 support: setting LC_*, displaying characters correctly, accepting input… I hear this got much easier in 10.5; maybe it all works out of the box now?
      - Setup of OS X-style keyboard text navigation (option-arrows, etc.)
      - How do you handle meta-key support? (other than ESC'ing your way around)
      - Other things to help our n00bs get around in the shell, such as: a list of useful default key bindings (^A, ^D, etc.), Mac-specific .profile and .inputrc goodness, and Mac-specific tools such as pbpaste & pbcopy, Open Terminal Here, etc.
      - If at all possible, a list of files to copy over to another machine that encompasses all the changes made to tune the Terminal (dotrc files, plists, etc.)
    And, well, anything else really. Just keep the scope on the Mac OS X Terminal application rather than general Unix setup and tools. I think a collection of incomplete answers would be a good start: post one or two things you remember having done, we'll vote them up, and after a few days I'll try to compile it all into a summary answer.
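
    A minimal sketch of the kind of portable recipe being asked for, assuming bash and the stock Terminal.app; the locale name and the alias are illustrative, not a definitive setup:

        # ~/.profile -- locale settings so UTF-8 text displays and inputs correctly
        export LANG=en_US.UTF-8
        export LC_ALL=en_US.UTF-8

        # Mac-specific convenience built on pbcopy/pbpaste
        alias cpwd='pwd | tr -d "\n" | pbcopy'   # copy the current path to the clipboard

        # ~/.inputrc -- let readline pass 8-bit/UTF-8 input through untouched
        set input-meta on
        set output-meta on
        set convert-meta off

    Copying ~/.profile, ~/.inputrc, and ~/Library/Preferences/com.apple.Terminal.plist to a new machine covers most of this; the plist path is from memory and worth verifying.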

    Read the article

  • UIImagePickerControllerDelegate Returns Blank "editingInfo" Dictionary Object

    - by Leachy Peachy
    Hi there, I have an iPhone app that calls upon the UIImagePickerController to offer folks a choice between selecting images via the camera or via their photo library on the phone. The problem is that sometimes (I can't always get it to replicate) the editingInfo dictionary object that is supposed to be returned by the didFinishPickingImage: delegate message comes back blank or (null). Has anyone else seen this before? I am implementing the UIImagePickerControllerDelegate in my .h file and I am correctly implementing the two delegate methods: didFinishPickingImage and imagePickerControllerDidCancel. Any help would be greatly appreciated. Thank you in advance! Here is my code.

    My .h file:

        @interface AddPhotoController : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate> {
            IBOutlet UIImageView *imageView;
            IBOutlet UIButton *snapNewPictureButton;
            IBOutlet UIButton *selectFromPhotoLibraryButton;
        }

        @property (nonatomic, retain) UIImageView *imageView;
        @property (nonatomic, retain) UIButton *snapNewPictureButton;
        @property (nonatomic, retain) UIButton *selectFromPhotoLibraryButton;

    My .m file:

        @implementation AddPhotoController

        @synthesize imageView, snapNewPictureButton, selectFromPhotoLibraryButton;

        - (IBAction)getCameraPicture:(id)sender {
            UIImagePickerController *picker = [[UIImagePickerController alloc] init];
            picker.delegate = self;
            picker.sourceType = UIImagePickerControllerSourceTypeCamera;
            picker.allowsImageEditing = YES;
            [self presentModalViewController:picker animated:YES];
            [picker release];
        }

        - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)image editingInfo:(NSDictionary *)editingInfo {
            NSLog(@"Image Meta Info.: %@", editingInfo);
            UIImage *selectedImage = image;
            imageView.image = selectedImage;
            self._havePictureData = YES;
            [self.useThisPhotoButton setEnabled:YES];
            [picker dismissModalViewControllerAnimated:YES];
        }

        - (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
            [picker dismissModalViewControllerAnimated:YES];
        }

    Read the article

  • Display random image when page loads without utilizing onload in the body tag

    - by Peachy
    I'm trying to create a fairly simple piece of JavaScript that displays a random image from an array each time the page loads. I need to figure out a way to get this running without adding code to the body tag. Is there a way to accomplish this without, say, an onload function placed in the body tag? Here's what I have that relies on the onLoad:

        ImageSwitch = new Array();
        ImageSwitch[0] = '1.jpg';
        ImageSwitch[1] = '2.jpg';
        ImageSwitch[2] = '3.jpg';
        ImageSwitch[3] = '4.jpg';

        function swapImage() {
            document.getElementById("theImage").setAttribute("src", ImageSwitch[Math.round(Math.random() * 3)]);
        }

    Any alternative ideas to accomplish this? Thanks.
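
    One common way to do this without touching the body tag is to attach the load handler from the script itself. A minimal sketch (the element id "theImage" and the filenames are carried over from the question; the addEventListener/attachEvent split is the usual cross-browser pattern of that era):

        var ImageSwitch = ['1.jpg', '2.jpg', '3.jpg', '4.jpg'];

        function swapImage() {
            var img = document.getElementById("theImage");
            // Math.floor over the array length picks each index with equal probability
            img.src = ImageSwitch[Math.floor(Math.random() * ImageSwitch.length)];
        }

        // register the handler from script instead of <body onload="...">
        if (window.addEventListener) {
            window.addEventListener("load", swapImage, false);
        } else if (window.attachEvent) {          // older IE
            window.attachEvent("onload", swapImage);
        }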

    Read the article

  • Using apt-get from Canada

    - by Advant Edge
    I am using Ubuntu Server 12.04 LTS, and until yesterday everything was quite peachy. I have installed several packages (MySQL, apache2) and to my knowledge have those configured correctly. Upon configuration of phpMyAdmin, I found that I was missing the directory path /etc/phpmyadmin, which got me thinking about the install. I am new to Ubuntu, so I guess I missed the message telling me that I did not download phpMyAdmin successfully. Anyway, trying to use apt-get yesterday/today results in "Failed to fetch..." messages, even just running sudo apt-get update. Some notable details:
      - no GUI, command line only (sudo apt-get gksu fails, go figure)
      - can ping 4.2.2.2, so I know the internet is out there (somewhere)
      - this is a dedicated computer, using Samba to share files with Windows, which does work
      - attempted to edit my /sources.list file for various American/Canadian mirrors, to no effect
      - ensured I have correct DNS settings in /etc/networks/interfaces
    I'm not sure where along the way it happened, but I seem to have lost my connection to the repositories... :) Any advice (including GO BACK TO WINDOWS) is appreciated.
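
    Since raw IP connectivity works (4.2.2.2 pings) but hostname-based fetches fail, a quick way to separate a DNS problem from a repository problem is something along these lines; the mirror hostname is just an example:

        # does the mirror's hostname resolve at all?
        nslookup archive.ubuntu.com

        # what resolver is the box actually using?
        cat /etc/resolv.conf

        # retry the index fetch once DNS looks sane
        sudo apt-get update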

    Read the article

  • Default Webcam Driver Issues

    - by Omegaclawe
    I'm having trouble getting my monitor-attached webcam (ASUS VK248H) to install on my new computer. On the old computer, it was a matter of not using a USB 3.0 port, but I can't get anything to work on the new one. I have tried all manner of uninstalling/reinstalling the driver and resetting the computer, as well as literally every USB port on the computer (14 in total). It's not that Windows isn't recognizing the device; it most certainly is. However, comparing driver details with the old computer, the new one is not using the ksthunk.sys driver in addition to usbvideo.sys, like the old (working) computer does. Naturally, I figured the way ahead was to simply get this other driver to work with the hardware, but I haven't found a way to do that. Does anyone know of a way I can force it to use ksthunk.sys? It seems rather difficult to get it to install anything when Windows feels that everything is peachy.

    Read the article

  • Ubuntu 12.04 Server ping gateway responds with destination host unreachable

    - by blckblttkd
    I consider myself fairly avid with Ubuntu and Linux, but this one has me stumped. I built up a Xen server using Ubuntu 12.04 as the base operating system, with multiple domUs running on it. On my home network, with a statically defined address, all the network connectivity was peachy. The server was moved to its permanent home this morning, so the network configuration on the host had to change. It's another static network, but now I can't ping the upstream gateway from the host. As the VMs use this NIC over a bridge, they too are broken. Ping responds with "destination host unreachable." I simplified the networking down to a simple static setup as seen below (no bridge or anything) just to get it to work. Here's the contents of my /etc/network/interfaces file:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 216.7.188.228
            gateway 216.7.188.225
            netmask 255.255.255.240
            broadcast 216.7.188.255
            network 216.7.188.0
            dns-nameservers 8.8.8.8 8.8.4.4

    Here's the output of route -n:

        0.0.0.0        216.7.188.225  0.0.0.0          UG  100  0  0  eth0
        216.7.188.224  0.0.0.0        255.255.255.240  U   0    0  0  eth0

    And the results of pinging the gateway:

        PING 216.7.188.225 (216.7.188.225) 56(84) bytes of data.
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable

    Again, this worked flawlessly on the other network (obviously with different parameters in the interfaces file). I did try using eth1, as there are two NICs on the server, in case the MAC addresses got flipped on bootup. No success there. Yes, the cable is in the right port now :) Any thoughts? I appreciate the help!
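
    One thing that stands out in the posted configuration: for a /28 (netmask 255.255.255.240) containing 216.7.188.228, the network address is 216.7.188.224 and the broadcast address is 216.7.188.239, not .0 and .255. A corrected sketch of the stanza, with the same addresses as in the question and only the derived values changed:

        auto eth0
        iface eth0 inet static
            address 216.7.188.228
            netmask 255.255.255.240
            network 216.7.188.224
            broadcast 216.7.188.239
            gateway 216.7.188.225
            dns-nameservers 8.8.8.8 8.8.4.4

    Whether that mismatch alone explains the unreachable gateway is not certain, but it is the first thing worth ruling out; an incomplete ARP entry for 216.7.188.225 in arp -n would be the next check.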

    Read the article

  • Duplicate a Drupal installation from one server to another

    - by irot
    Hello. I have been developing a Drupal 6 site on my PC using XAMPP. I'm done now, and everything looks peachy. Problem is, I need to put all my content (including custom modules and themes) up onto a staging server which only has a fresh Drupal 6 install on it. I can't imagine having to set up all my custom content types and whatnot all over again on the staging server. So I ask: how does one go about duplicating my Drupal install from my PC to the staging server? The staging server is running Linux, and I develop on a Windows PC, if that helps. Thanks in advance.
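
    In broad strokes, duplicating a Drupal 6 site means moving both the database and the site files. A rough sketch of the usual approach, assuming the database is called drupal and the custom code lives under sites/ as Drupal convention suggests (database names, users, and paths here are placeholders):

        # on the Windows PC (XAMPP ships mysqldump in mysql/bin)
        mysqldump -u root -p drupal > drupal.sql

        # copy the dump plus the sites/ tree (settings.php, custom modules, themes, files)
        scp -r drupal.sql sites/ user@staging:/var/www/drupal/

        # on the staging server, load the dump into the target database
        mysql -u drupal -p drupal < drupal.sql

    After the import, sites/default/settings.php on the staging server still needs its own database credentials (and base URL, if set).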

    Read the article

  • How do I fire an asynchronous call in asp classic and ignore the response?

    - by Hexate
    Here's the gist: I have a call I want to make in ASP, and I do not care about the response. I just want to fire the call, and I do not want the page to wait for the response. According to the documentation, it should look something like this:

        dim xmlhttp : set xmlhttp = Server.CreateObject("MSXML2.ServerXMLHTTP")
        xmlhttp.Open "POST", url, true  '' setting the 'asynchronous' option to 'true'
        xmlhttp.setRequestHeader "Content-Type", "application/soap+xml; charset=utf-8"
        xmlhttp.setRequestHeader "Content-Length", Len(XMLData)
        xmlhttp.send XMLData

    This works peachy when calling synchronously, but when I flip the asynchronous option to 'true', nothing fires. What I can gather from the internet is that users do something like the following:

        While xmlhttp.readyState <> 4
            xmlhttp.waitForResponse 1000
        Wend

    Am I crazy in thinking that this does not really seem like an asynchronous call anymore if you are waiting for a response? Putting the line xmlhttp.waitForResponse 1 right after the send will cause the request to fire, but again, I don't want to wait a second. Any thoughts?

    Read the article

  • How does PHP's list function work?

    - by Jacob Relkin
    After recently answering a couple of questions here on SO that involved utilizing PHP's list function, I wondered, "how in the world does that function actually work under the hood?" I was thinking about something like using func_get_args() and then iterating through the argument list, and that's all nice and peachy, but then how in the world does the assignment part work?

        list(...) = array($x, $y, $z);

    Isn't the right-hand side evaluated first? So to be precise, my question is: how is the list function able to create scoped variables which get assigned from the not-yet-evaluated array?
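
    list() is not an ordinary function at all; it is a language construct that is only valid as the target of an assignment, so the whole statement is compiled specially: the right-hand side is evaluated first, and its elements are then bound to the listed variables. As a purely illustrative analogy (JavaScript rather than PHP), array destructuring assignment works the same way syntactically:

        // the right side is evaluated to an array first,
        // then unpacked into the named variables
        var arr = [10, 20, 30];
        var x, y, z;
        [x, y, z] = arr;          // destructuring assignment, not a function call
        console.log(x, y, z);     // 10 20 30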

    Read the article

  • .NET designer assemblies, C++/C# error

    - by greggorob64
    I'm working on a designer-heavy application (using Visual C++ 2.0, but a C# solution should still be relevant). My setup is this:
      - I have a UserControl named "Host"
      - I'm writing a UserControl named "Child"
      - Child contains a property to a class whose type is defined in a different DLL entirely, named "mytools.dll"
    Child works just fine in the designer. However, when I go to drag "Child" onto "Host" from the designer, I get the following error:

        Failed to create component 'Child'. The error message follows:
        'System.IO.FileNotFoundException: could not load file or assembly MyTools, Version XXXXXX, Culture=neutral .....
        {unhelpful callstack}

    If I comment out the property in "Child" that points to the class in mytools.dll, everything designs just peachy. I have the property marked with Browsable(false) and DesignerSerializable(hidden), and it does not help. Is there a way for me to explicitly say "Don't load this DLL, you won't need it at design time", or some way for me to force a DLL to load from the designer programmatically? Thanks!

    Read the article

  • How can I run a function anytime anything is animated with jQuery?

    - by WillyCornbread
    Hi - I have some jQuery animations in my code to slide divs up and down in response to some mouse clicks and other logic. This is all working just peachy; however, in IE6 some of the smaller icon images on the page don't slide along with the rest of the div for some strange reason. They kind of stay put, then flicker into the new position, and I've chalked this up to an IE6 'feature'. Considering that I have to support IE6, I want to just hide the icons anytime an animation starts and show them again when the queue is empty. I couldn't find a reference to any kind of events or hooks into the queue itself, and I'd rather not add the hide code and then the show code to every animation as a callback. Thanks if you can help - b
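
    One workable pattern, sketched below, is to wrap jQuery's $.fn.animate once so every animation (including slideUp/slideDown, which call it internally) passes through a single hook: hide the icons before the original animate runs, then poll jQuery's :animated selector to notice when nothing is animating anymore. The .icon selector is a placeholder for whatever marks the flickering images, and the 100 ms poll interval is arbitrary:

        (function ($) {
            var originalAnimate = $.fn.animate;   // keep a reference to the real method
            var pollTimer = null;

            $.fn.animate = function () {
                $(".icon").hide();                // hide the problem icons whenever any animation starts

                if (!pollTimer) {
                    // watch for the moment no element is animating any more
                    pollTimer = setInterval(function () {
                        if ($(":animated").length === 0) {
                            clearInterval(pollTimer);
                            pollTimer = null;
                            $(".icon").show();    // queue has drained, bring the icons back
                        }
                    }, 100);
                }

                // hand everything through to the original implementation unchanged
                return originalAnimate.apply(this, arguments);
            };
        })(jQuery);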

    Read the article

  • How can I tell Xcode to recheck project resources that have been modified?

    - by Nick
    I'm working with a designer friend on an iPhone app, and he likes to refine all sorts of images relating to the project we're working on. All these images have been added to the project previously (and added to the project folder by Xcode) and are then modified in their new location. When I preview the images in Xcode, the updated images show up, but building and running in the simulator or on a device doesn't pick up the new image. In fact, if I do a clean build it seems to ignore the image altogether, and blank spaces appear where images should be. Now, I can delete these files from the project and re-add them and everything works peachy again. But there are a lot of them, and I'd rather not do that every time an image is updated. Is there a way to get Xcode to review and "learn" about these modified images? Is there a good reason why it's not doing that automatically?

    Read the article

  • 80 Years of Supplier Misinformation: How can Oracle Supplier Hub Help?

    - by Mala Narasimharajan
    By Mark Peachy
    Well, we're down to the final week before this year's Oracle Open World conference kicks off on Sunday, and there's still plenty of work to be done to be ready in time. One of the great benefits I think attendees get from Open World is the opportunity to listen to other organizations talk about their implementation experiences. Typically, these sessions provide hugely valuable insights gained during a deployment, delivering a wealth of practical information on what it really takes to get an organization up and running with a new module or a revamped business process. And I'm not just saying this because we're lucky enough to have one of our early implementers join us for this year's Supplier Hub/Supplier Lifecycle Management MDM session! With a multi-phased deployment underway, this customer is working to fix a long, 80-year history without much in the way of formal processes or tools to manage all of their accumulated supplier information. Faced with a mess of supplier details, they had been challenged to efficiently track supplier spend, monitor performance, maintain qualification information, or carry out meaningful risk analysis. Join us on Wednesday to hear how they are addressing these issues and the plans they have to evolve their supplier management techniques - it's a great story.
    CON9242: Oracle Supplier Lifecycle Management and Oracle Supplier Hub for Better Supply Base Management
    Wednesday, October 3rd at 1:15 PM, InterContinental Hotel, Sutter Suite

    Read the article

  • Microsoft Office 2003 applications crash on 'Save As' to a network mapped drive

    - by Archit Baweja
    Hey guys, I'm not sure if this belongs on the ServerFault forums, so I figured I'd ask here first because it's a workstation/client-side issue. I have a client where we have Windows Server 2003 set up, with Windows XP Professional on all the workstations. We've set up a domain, all workstations log on to the domain (authenticated by the Windows domain controller), and in the logon script we map drives on each workstation. Everything is working peachy except for one workstation: when I open a file in Excel from a mapped drive, it opens fine, but when I hit Save As, the Save As dialog pops up and hangs. I cannot perform any other action in Excel, and when I try to cancel the Save As dialog, Excel crashes. The mapped drive opens fine in Windows Explorer. To investigate further, I created a new blank text document on the network drive in Windows Explorer, opened it, and hit Save As; the dialog opened fine and let me save the document. I repeated the same steps for a Word document, and this time the Save As dialog hung/froze again. So I'd imagine it's a Microsoft Office issue. Any ideas?

    Read the article

  • What could cause a huge packet loss in Ubuntu 9.10, for both wired and wireless?

    - by xzenox
    I was previously using 9.04 fine (and in fact, I am posting this from my old 9.04 live CD). I tested the following install steps in a VirtualBox VM prior to following the same ones to upgrade my laptop:
      - Download/burn the Ubuntu minimal CD (the 12 MB one)
      - Install Ubuntu minimal
      - sudo apt-get update
      - sudo apt-get upgrade
      - sudo apt-get ubuntu-desktop ubuntu-standard
    In the VM this worked fine and I found myself with a working 9.10 Ubuntu; the network worked fine and I was able to test my backups and Dropbox without a hitch (the host was 9.04). When I followed the same steps on my laptop, everything worked up to the point of 9.10 being installed and running. As far as I can tell, everything besides eth0/wireless works. For some reason, I am unable to access the internet. Ping reports that over 99% of packets get lost (over an hour or so of pinging). This means, for example, that if I try hard enough I can load a webpage, but only at the cost of much patience... This happens both for a wired and a wireless connection to my WRT310N (updated with the latest firmware). At first I thought it could be related to the IPv6 issues people have been experiencing, but even after disabling IPv6 at the kernel level (through GRUB), I still get the issue. I do not think this is related to DNS issues or the like, since even when I ping my ISP's gateway IP I have the same amount of packet loss, and no DNS resolving should be required there. Access to my router works peachy with no packet loss. I've tried different MTU values but to no avail. Note that this issue affects every web-enabled application: Firefox, ping, Synaptic, etc. The same hardware/router combo works with 9.04 but not with 9.10. In fact, when I did sudo apt-get ubuntu-desktop ubuntu-standard after 9.10 minimal was installed, it downloaded over 400 MB of packages without a hitch, so my guess is that one of the packages in either ubuntu-desktop or ubuntu-standard is causing havoc. Thoughts?
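
    Heavy loss on both wired and wireless from the same machine usually points at something common to both paths (driver, power management, negotiation) rather than the router. A few low-level checks worth running, sketched here for the wired side; eth0 is assumed from the question, ethtool may need installing first, and 203.0.113.1 is only a placeholder for the ISP gateway address:

        # link speed, duplex, and auto-negotiation state of the wired NIC
        sudo ethtool eth0

        # RX/TX error and drop counters -- climbing numbers suggest a driver problem
        ifconfig eth0

        # repeat the loss measurement against the ISP gateway, bypassing DNS entirely
        ping -c 50 203.0.113.1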

    Read the article

  • Updating Dell PowerEdge firmware on Ubuntu?

    - by Shtééf
    The company I work for recently got its hands on a batch of second-hand PowerEdge SC1425 machines, and we'd like to put them to good use. Our operating system of choice is Ubuntu Server 10.04 64-bit, which installs just peachy on this type of machine. Now I'd like to install the firmware updates from Dell, which are apparently marked as recommended. This includes updates for the BIOS, the BMC, and possibly some other hardware. I find it incredibly difficult to locate the files on the Dell website and to install any of them on an Ubuntu system. I downloaded the file OM_6.2.0_SUU_A01.iso. I believe I've read that the SUU DVD should be able to update any recent PowerEdge.
      - Is this correct? Is this the latest version?
      - Besides the version number, does A01 have any meaning?
      - Is this image bootable? (At the moment, I just nosed around with a loop device mount.)
    Running /bin/bash ./suu from the DVD, I get:

        # /bin/bash ./suu
        ./suu: line 262: ./java/linux/i386/bin/java: No such file or directory

    The file exists and is executable, though. But I cannot execute it directly from the shell either.
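
    "No such file or directory" for a binary that clearly exists is the classic symptom of a 32-bit executable missing its 32-bit loader and libraries on a 64-bit system, and the bundled SUU Java lives under linux/i386. A quick way to confirm, plus the usual remedy as it existed on Ubuntu 10.04 (package name from that release):

        # confirm the binary is 32-bit (expect "ELF 32-bit LSB executable")
        file ./java/linux/i386/bin/java

        # install the 32-bit compatibility libraries on Ubuntu 10.04
        sudo apt-get install ia32-libs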

    Read the article

  • Windows 7 sharing folder from command line, selecting users and triggering the "Apply" of changes

    - by clintp
    I have a drive that doesn't get mounted until after I log in (a TrueCrypt thumbdrive device, and no, I'm not making it a "System Favorite" to get around this). I'd like to construct a batch file to share it once I've gotten it mounted, because the sharing info doesn't seem to stick through a reboot. From the GUI, I'd go into the folder's Properties > Sharing, then in Advanced Sharing pick the name to share it as, and then under the "Share..." button pick the users and the permissions I want to grant them. After "Apply" there's a pause -- I'm not sure what's happening here, but the dialog says "Sharing Items..." -- and then everything is okay. From the command line, I've done:

        net share MyFolder=F:\MyFolder
        cacls F:\MyFolder /G FirstUser:F
        cacls F:\MyFolder /G OtherUser:F

    And this almost works. I can see the share on the network, but nobody has permissions to do anything. If I go into the GUI and change anything (and I can see my command-line changes in there already) and press "Apply", I get the "Sharing Items... This may take a few minutes" dialog... and then voila! It works. I get the "Your folder is shared" dialog with the command-line changes I made, along with the GUI change I made to trigger the "Sharing Items..." dialog. Everything's peachy. Is a service being restarted? Which one? What's triggering the sharing to take effect? And -- more importantly -- how do I do it from the command line?
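
    Two details about the commands shown are worth noting: net share can carry the share-level permissions itself via /GRANT, and cacls /G without /E replaces the folder's ACL instead of adding to it, which can leave everyone else locked out. A sketch of the equivalent done in one pass with icacls, the current replacement for cacls (user and share names carried over from the question):

        rem create the share and grant share-level access in the same command
        net share MyFolder=F:\MyFolder /GRANT:FirstUser,FULL /GRANT:OtherUser,FULL

        rem add (not replace) NTFS permissions on the underlying folder
        icacls F:\MyFolder /grant FirstUser:(OI)(CI)F OtherUser:(OI)(CI)F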

    Read the article

  • XNA 4 Deferred Rendering deforms the model

    - by Tomáš Bezouška
    I have a problem when rendering a model of my world: rendered using BasicEffect, it looks just peachy. The problem is when I render it using deferred rendering. See for yourselves:
      - what it looks like: http://imageshack.us/photo/my-images/690/survival.png/
      - what it should look like: http://imageshack.us/photo/my-images/521/survival2.png/
    (Please ignore the cars, they shouldn't be there. Nothing changes when they are removed.) I'm using the deferred renderer from www.catalinzima.com/tutorials/deferred-rendering-in-xna/introduction-2/, except very simplified, without the custom content processor. Here's the code for the GBuffer shader:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;
        float specularIntensity = 0.001f;
        float specularPower = 3;
        texture Texture;

        sampler diffuseSampler = sampler_state
        {
            Texture = (Texture);
            MAGFILTER = LINEAR;
            MINFILTER = LINEAR;
            MIPFILTER = LINEAR;
            AddressU = Wrap;
            AddressV = Wrap;
        };

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float3 Normal : NORMAL0;
            float2 TexCoord : TEXCOORD0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 TexCoord : TEXCOORD0;
            float3 Normal : TEXCOORD1;
            float2 Depth : TEXCOORD2;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4 worldPosition = mul(input.Position, World);
            float4 viewPosition = mul(worldPosition, View);
            output.Position = mul(viewPosition, Projection);
            output.TexCoord = input.TexCoord;         // pass the texture coordinates further
            output.Normal = mul(input.Normal, World); // get normal into world space
            output.Depth.x = output.Position.z;
            output.Depth.y = output.Position.w;
            return output;
        }

        struct PixelShaderOutput
        {
            half4 Color : COLOR0;
            half4 Normal : COLOR1;
            half4 Depth : COLOR2;
        };

        PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
        {
            PixelShaderOutput output;
            output.Color = tex2D(diffuseSampler, input.TexCoord);        // output Color
            output.Color.a = specularIntensity;                          // output SpecularIntensity
            output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f); // transform normal domain
            output.Normal.a = specularPower;                             // output SpecularPower
            output.Depth = input.Depth.x / input.Depth.y;                // output Depth
            return output;
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_2_0 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    And here are the rendering parts in XNA:

        public void RednerModel(Model model, Matrix world)
        {
            Matrix[] boneTransforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(boneTransforms);

            Game.GraphicsDevice.DepthStencilState = DepthStencilState.Default;
            Game.GraphicsDevice.BlendState = BlendState.Opaque;
            Game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (ModelMeshPart meshPart in mesh.MeshParts)
                {
                    GBufferEffect.Parameters["View"].SetValue(Camera.Instance.ViewMatrix);
                    GBufferEffect.Parameters["Projection"].SetValue(Camera.Instance.ProjectionMatrix);
                    GBufferEffect.Parameters["World"].SetValue(boneTransforms[mesh.ParentBone.Index] * world);
                    GBufferEffect.Parameters["Texture"].SetValue(meshPart.Effect.Parameters["Texture"].GetValueTexture2D());
                    GBufferEffect.Techniques[0].Passes[0].Apply();
                    RenderMeshpart(mesh, meshPart);
                }
            }
        }

        private void RenderMeshpart(ModelMesh mesh, ModelMeshPart part)
        {
            Game.GraphicsDevice.SetVertexBuffer(part.VertexBuffer);
            Game.GraphicsDevice.Indices = part.IndexBuffer;
            Game.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount);
        }

    I import the model using the built-in content processor for FBX. The FBX is created in 3DS Max. I don't know the exact details of that export, but if you think it might be relevant, I will get them from my colleague who does them. What confuses me, though, is why the BasicEffect approach works... it seems the FBX shouldn't be the problem. Any thoughts? They will be greatly appreciated :)

    Read the article

  • MVC App Works in Visual Studio, but not IIS7

    - by kesh
    Working on an ASP.NET MVC project, and I'm having some difficulties deploying to a shared dev server. Locally, when debugging using the local Visual Studio 2008 server, everything works peachy. However, once deployed, I receive the following error:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
        Parser Error Message: Unable to find an entry point named 'BCryptGetFipsAlgorithmMode' in DLL 'bcrypt.dll'.
        Source Error:
        Line 1: <%@ Application Codebehind="Global.asax.cs" Inherits="APPLICATION_NAME.Web.MvcApplication" Language="C#" %>
        Source File: /APPLICATION_NAME/global.asax    Line: 1
        Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927

    In the error log:

        Event sequence: 1
        Event occurrence: 1
        Event detail code: 0
        Application information:
            Application domain: /LM/W3SVC/1/ROOT/APPLICATION_NAME-4-128995312096183595
            Trust level: Full
            Application Virtual Path: /APPLICATION_NAME
            Application Path: E:\PROJECTS\APPLICATION\APPLICATION_NAME\APPLICATION_NAME\app\APPLICATION_NAME.Web\
            Machine name: PC
        Process information:
            Process ID: 4608
            Process name: w3wp.exe
            Account name: IIS APPPOOL\DefaultAppPool
        Exception information:
            Exception type: HttpException
            Exception message: Unable to find an entry point named 'BCryptGetFipsAlgorithmMode' in DLL 'bcrypt.dll'.
        Request information:
            Request URL: http://localhost/APPLICATION_NAME
            Request path: /APPLICATION_NAME
            User host address: ::1
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: IIS APPPOOL\DefaultAppPool
        Thread information:
            Thread ID: 6
            Thread account name: IIS APPPOOL\DefaultAppPool
            Is impersonating: False
            Stack trace:
                at System.Web.Compilation.BuildManager.ReportTopLevelCompilationException()
                at System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled()
                at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters)
        Custom event details:

    After finding the deployment error, I tried adding an application locally, and that seems to result in the same error. On my local dev machine I'm using Windows 7 RTM (x64), and the shared server is running Windows Server 2008 Standard (x86). I poked around, and FIPS encryption in my Local Security Policy is disabled, so I'm at a bit of a loss.

    Read the article

  • Access denied error on select into outfile using Zend

    - by Peter
    Hi, I'm trying to make a dump of a MySQL table on the server, and I'm trying to do this in Zend. I have a model/mapper/dbtable structure for all my connections to my tables, and I'm adding the following code to the mappers:

        public function dumpTable()
        {
            $db = $this->getDbTable()->getAdapter();
            $name = $this->getDbTable()->info('name');
            $backupFile = APPLICATION_PATH . '/backup/' . date('U') . '_' . $name . '.sql';
            $query = "SELECT * INTO OUTFILE '$backupFile' FROM $name";
            $db->query( $query );
        }

    This should work peachy, I thought, but what it results in is:

        Message: Mysqli prepare error: Access denied for user 'someUser'@'localhost' (using password: YES)

    I checked the user rights for someUser and he has all the rights to the database and table in question. I've been looking around here and on the net in general, and usually turning on "all" the rights for the user seems to be the solution, but not in my case (unless I'm overlooking something right now with my tired eyes; plus I don't want to turn on "all" on my production server). What am I doing wrong here? Or does anybody know a more elegant way to get this done in Zend?
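
    One thing worth ruling out: writing server-side files with SELECT ... INTO OUTFILE requires the global FILE privilege, which is separate from every table- and database-level right and cannot be granted per database. If that turns out to be the issue here, the grant is a one-liner run as a MySQL admin user (and FILE effectively lets the account write files anywhere the server can, which is worth weighing on a production box):

        GRANT FILE ON *.* TO 'someUser'@'localhost';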

    Read the article

  • Porting a web application to work in IE7

    - by Bears will eat you
    I'm developing a web application that uses lots of Javascript and CSS, both of my own creation and through third-party libraries. These include jQuery and Google Maps & Visualization JS APIs. I've been testing everything in Firefox 3. Things are peachy until it turns out the main target of this webapp is (cue sad trombone) IE7. I'm looking for caveats, advice, libraries, or other references to help make this transition as easy as possible (not that it's actually going to be easy). I've already tried IE7.js though it hasn't yet shown itself to be the silver bullet I was hoping for. I'm sure that it works as advertised, I think it's just not as all-encompassing as I'd like (example: colors like #4684EE and #DC3912, which are correctly rendered in FF3, are rendered as black in IE7, with or without IE7.js). Are there other libraries out there to help bring IE7 (more) in line with FF3? A corollary question: what debugger would you recommend for IE7? I'm currently using Firebug Lite, but it runs painfully slowly. Is there anything out there with similar features that I might have missed?

    Read the article

  • Colorbox iFrame content not appearing in IE 8/9

    - by Rocketpig
    I'm using ColorBox to call a few informational modals on-screen, and given the client's requirements, the best way to do this is via iframes (not my first choice, but whatever). Everything is working peachy in Chrome, FF, etc., but the iframe content is not working in any version of IE: the modal wrapper appears but nothing is inside. This is what I've done so far:
      - Changed the doctype to transitional and strict for IE. No dice.
      - Removed the "iframe: true" and replaced it with HTML "Hello". That worked fine and "Hello" appeared in the ColorBox modal.
      - Removed all stylesheets from the header. No luck, so it's not a CSS issue.
      - Just to be sure, rolled back my jQuery library to 1.6.2 from 1.8.2. Nothing there, either.
    Any help would be appreciated. This is aggravating. Some code:

        $(function () {
            $(".modal-large").colorbox({iframe:true, innerWidth:580, innerHeight:500});
        })

    HTML:

        <div class="top-droptext"><a class="modal-large" href="modal/serviceproviderinfo.html">Update Password</a></div>
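
    As a debugging aid, ColorBox's callbacks can show whether IE is even creating the iframe and what src it ends up with. The sketch below also turns off fastIframe so the box waits for the frame's load event; both options come from ColorBox's documented API but are worth double-checking against the version in use, and the selector and sizes are carried over from the question:

        $(function () {
            $(".modal-large").colorbox({
                iframe: true,
                fastIframe: false,          // wait for the iframe's load event before revealing it
                innerWidth: 580,
                innerHeight: 500,
                onComplete: function () {
                    // inspect the iframe ColorBox actually injected (check IE's dev tools console)
                    var frame = $("#cboxLoadedContent iframe");
                    if (window.console) {
                        console.log("iframe count:", frame.length, "src:", frame.attr("src"));
                    }
                }
            });
        });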

    Read the article

  • VMware Player loses internet connectivity

    - by Martha
    Periodically, the internet simply stops working in my virtual machine, and the only way I can get it working again is to restart the host computer. Since I use the virtual machine specifically for testing web pages, this is, shall we say, a bother.

    Details: I have Windows XP Pro running in VMware Player (v. 3.0.0 build-203739) on a Windows 7 host. It's set to NAT (shared IP address) because the firewall won't allow a bridged connection. Every couple of days or so, first the internet slows down to a crawl, then eventually it stops working altogether. Both VMware and the virtual OS report that they are connected, everything looks just peachy, I can reach the internet from the host, but on the VM, all web pages time out and/or report that the server could not be found. (Browser-independent; tried with IE, FF, Chrome, Safari, and Opera.)

    When this happens, the only way I've found to restore the internet connectivity is to restart the host machine. Restarting the VM doesn't help, nor does refreshing network connections on either the host or the guest. (Although I'm not entirely sure I've found the proper way to refresh a network connection in Windows 7...)

    I have not noticed any predictability about when the problem occurs, i.e. it's not immediately after I do anything special. It seems to occur mostly after putting the host to sleep once or twice, but it has happened even if the host has been in continuous use. It also seems independent of when I start using the VM - sometimes, I wake up the VM and the internet is really slow in it, then eventually stops working altogether; other times, I wake up the VM, use it perfectly happily for a while, then suddenly the internet is gone.

    Does anyone know why this is occurring? Failing that, is there a workaround that's less drastic than restarting the host? (Windows 7 startup times are blazingly fast compared to previous versions of Windows, but it's still a hassle to close all my programs and reopen them again.)

    Edit: while badges overall are nice, the Tumbleweed badge isn't helping me to solve my problem. Hasn't anyone encountered anything even remotely similar?

    Read the article
