Search Results

Search found 8161 results on 327 pages for 'explicit loading'.

Page 175 of 327

  • Viewing file: protocol requests made from a swf

    - by Erik Vold
    I've got a SWF embedded in an index.html file. The SWF basically loads images, but first it requests an XML file that describes where the image/asset files are located. The index file (which is just HTML, JS, and CSS) works, i.e. the SWF loads, when served at http://localhost:8500/core/index.html, which I can do with the ColdFusion 8 single-server development environment. But when I open the file directly at file:///C:/ColdFusion8/wwwroot/core/index.html, the SWF does not work, so I'm guessing it is having trouble locating the files it needs. The problem is that I have no idea what file URLs it is trying to access at the moment, and neither Firebug nor Fiddler can show me requests made over the file: protocol. So is there another tool that I can use?
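
    One tool that does see file: traffic is the debug version of Flash Player, which can log its trace output and policy-file/URL activity to local log files; a hedged mm.cfg sketch (assumes the debug player is installed; the log files normally land under the user's Macromedia\Flash Player\Logs folder):

      # %HOMEDRIVE%%HOMEPATH%\mm.cfg  (Windows location for the debug player's settings)
      ErrorReportingEnable=1
      TraceOutputFileEnable=1
      PolicyFileLog=1          # records policy-file and cross-domain request activity, including the URLs requested
      PolicyFileLogAppend=1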

    Read the article

  • How to work around the GlassFish 3.0.1 admin page's slow startup?

    - by black sensei
    I've installed NetBeans, which comes with GlassFish. I just got a book about GlassFish and wanted to try it, and the first surprise was how long the admin page takes to load. I've found on Server Fault and by googling that the server makes calls to external resources over the network when online (not sure about that), but adding the suggested Java options didn't speed up the loading of the admin page. How can I work around it? I've heard such good things about GlassFish that this leaves me perplexed. Thanks for reading and for helping.
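
    One workaround commonly cited for GlassFish 3.x is telling the admin console not to contact the update center at startup; a hedged sketch using asadmin (the property name below is the one usually quoted for this version and is worth verifying against the GlassFish documentation; domain1 is the default domain name):

      asadmin create-jvm-options -Dcom.sun.enterprise.tools.admingui.NO_NETWORK=true
      asadmin restart-domain domain1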

    Read the article

  • XAMPP Closes the Connection and won't let me download anything

    - by Miro Markarian
    I want my XAMPP Apache server to host a zip file (around 250 MB), but the server closes the connection and won't let me download the file. Websites are loading correctly, however, and it seems the problem is tied to the file extension. Do XAMPP/Apache have any file-extension limit that blocks downloading .zip and .exe files? I also tested with a smaller .exe file and the problem is still present; it just doesn't let me download any file from the server. Here is the file link to check: Test. All I get in the error log is this: [Fri Sep 07 23:21:31.742625 2012] [authz_core:debug] [pid 3664:tid 396] mod_authz_core.c(808): [client x.x.x.x:23409] AH01628: authorization result: granted (no directives), referer: http://ammiprox.tk/greeneyes2910/
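
    Apache has no extension-based download limit, and the log line quoted above is only a debug-level authorization message, not an error. On Windows, dropped or stalled downloads of large static files are often tied to the sendfile/mmap optimizations, so one hedged thing to try in httpd.conf (the directives are standard Apache; the diagnosis itself is an assumption):

      # httpd.conf - serve static files without sendfile()/mmap(), which misbehave on some Windows setups
      EnableSendfile Off
      EnableMMAP Off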

    Read the article

  • How do you change the "scan this dir for additional ini files" path?

    - by amvx
    I managed to get a custom php.ini to load, but PHP is still loading other .ini files from the default location. I created an FCGI wrapper that passed the ini path as a parameter, and that worked; now those other .ini files just need to be loaded from the same directory as my custom ini. The problem is that the other .ini files are overriding the settings in my custom php.ini. I realize now that php.fcgi was compiled with a custom scan-path parameter, so that's a problem; I might have to recompile it using a different location, or none at all. I'd hate to have to compile an FCGI binary for each domain.
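
    The additional-ini scan directory is set at build time (--with-config-file-scan-dir), but it can be overridden per process with the PHP_INI_SCAN_DIR environment variable, which would avoid one build per domain; a hedged wrapper sketch (all paths and the php-cgi location are placeholders for this setup):

      #!/bin/sh
      # hypothetical per-domain FastCGI wrapper
      export PHPRC=/home/domain1/etc                      # directory holding this domain's php.ini
      export PHP_INI_SCAN_DIR=/home/domain1/etc/conf.d    # scan only this domain's extra .ini files
      # export PHP_INI_SCAN_DIR=                          # ...or set it empty to skip additional .ini files entirely
      exec /usr/bin/php-cgi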

    Read the article

  • VSTS2010 crashes on opening the team explorer->My queries node

    - by vstsuser
    VSTS 2010 crashes when opening the Team Explorer -> My Queries node. This does not happen if MSN Office Communicator (2007 R2) is exited; the issue occurs only while Office Communicator is up and running (no issues are reported by Communicator itself, though). Any help in this regard? Debug log: Unable to cast COM object of type 'CommunicatorAPI.MessengerClass' to interface type 'MessengerAPI.IMessenger2'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{....}' failed due to the following error: Error loading type library/DLL. (Exception from HRESULT: 0x80029C4A (TYPE_E_CANTLOADLIBRARY)). Windows event log: EventType clr20r3, P1 devenv.exe, P2 10.0.30319.1, P3 4ba1fab3, P4 microsoft.teamfoundation.collaboration.microsoft, P5 10.0.0.0, P6 4c7d8ce4, P7 49, P8 0, P9 system.invalidcastexception, P10 NIL.

    Read the article

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit. I'm trying to connect remotely to a Postgres instance in order to run pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working. To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data: *:5432:*postgres:[mypassword] (I also tried explicit ip/dbname values, all asterisks, and every combination in between, and tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively). From my client machine I connect using: pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile]. I'm presuming that I need to provide the -w switch in order to skip the password prompt, at which point it should look in the AppData directory. It just comes back with "connection to database failed: fe_sendauth: no password supplied". Any insights are appreciated. As a hack workaround, if there were a way to tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
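
    For reference, the libpq documentation describes the password file as one entry per line with five colon-separated fields, read on Windows from %APPDATA%\postgresql\pgpass.conf (which normally resolves to the AppData\Roaming subfolder, not AppData directly), and it is consulted by the client, not the server. A minimal sketch with a placeholder password:

      # %APPDATA%\postgresql\pgpass.conf  (e.g. C:\Users\<user>\AppData\Roaming\postgresql\pgpass.conf)
      # hostname:port:database:username:password
      *:5432:*:postgres:mypassword

    The file location can also be forced from a batch script by setting the PGPASSFILE environment variable (set PGPASSFILE=C:\path\to\pgpass.conf) before calling pg_dump.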

    Read the article

  • Problems installing Windows service via Group Policy in a domain

    - by CraneStyle
    I'm reasonably new to Group Policy administration and I'm trying to deploy an MSI installer via Active Directory to install a service. In reality, I'm a software developer trying to test how my service will be installed in a domain environment. My test environment: a Server 2003 domain controller and about 10 machines (between XP SP3 and Server 2008), all joined to my domain. No other real setup or Active Directory configuration has been done apart from things like getting DNS right. I suspect that I may be missing a step in Group Policy that says I need to grant an explicit permission somewhere, but I have no idea where that might be or what it will say. What I've done: I followed the documentation from Microsoft in How to Deploy Software via Group Policy, so I believe all those steps are correct (I used the UNC path, verified NTFS permissions, and verified the computers and users are members of groups that are assigned to receive the policy, etc.). If I deploy the software via the Computer Configuration, when I reboot the target machine I get the following: the computer logs Event ID 108 at startup and says "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was: An operations error occurred." There are no previous log entries to check, which is weird, because if it ever actually invoked the Windows Installer it should log any failure of my application's installer. If I open a command prompt and manually run: msiexec /qb /i \\[host]\[share]\installer.msi it installs the service just fine. If I deploy the software via the User Configuration, when I log that user in, the Event Log says that software changes were applied successfully, but my service isn't installed. However, when deployed via the User Configuration, even though it's not installed, if I go to Control Panel - Add/Remove Programs and click on Add New Programs, my service installer is advertised and I can install/remove it from there (this does not happen when it's assigned to computers). Hopefully that wall of text was enough information to get me going; thanks all for the help.

    Read the article

  • Need disc image help pronto!

    - by data
    I recently got a job as a junior network administrator. Last week the senior admins did their yearly reinstall of Server 2003, Exchange, drivers, etc. on the main server. I've been asked to back up the disk so that next year they can just copy the pre-made image back over. What tools can I use to both create an image of the entire server's HDD and load it back on (I'd like to test it in the sandbox)? To impress them, a free program is preferable, and ideally a tool that can do it all booted from a USB drive.
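
    One free option that runs entirely from a bootable USB stick and handles both capture and restore is Clonezilla Live; the same job can also be done by hand with dd from any Linux live environment, as in this hedged sketch (device names and the mount point are placeholders to confirm with fdisk -l before running anything):

      # booted from a Linux live USB, with an external drive mounted at /mnt/backup
      fdisk -l                                                        # confirm which device is the server's system disk
      dd if=/dev/sda of=/mnt/backup/server2003.img bs=4M conv=noerror,sync
      # restore later (destructive - overwrites /dev/sda):
      dd if=/mnt/backup/server2003.img of=/dev/sda bs=4M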

    Read the article

  • What is a good motherboard for the Intel i7 series processor?

    - by jasondavis
    I am lost on choosing a good motherboard at a decent price for my Intel i7-930 CPU. I have read bad things about all of them so far: issues with RAM not working correctly, Windows not loading with USB keyboards plugged in, and all kinds of nightmare stories. Here are my needs:
    - 6 Gb/s SATA 3
    - USB 3.0
    - Cheaper is better
    - Support for up to 24 GB of RAM (I will start out with only 12 GB)
    - I will use the 64-bit version of Windows 7, so the 4 GB RAM limit should not be a problem for my OS
    - It should be able to boot even if I have a USB keyboard plugged in
    If you have experience with a motherboard that meets these specs, please tell me about it. I appreciate any help; I am stuck right now on picking a good board.

    Read the article

  • Connect through a remote computer's connection

    - by Didac
    First, sorry for my English and my poor knowledge of this subject. I have a dedicated server located in Germany (Windows 2008 R2) and I live in Spain. I would like to access the internet from my home computer (Windows 7 Pro x64) through my server in Germany, so I can use a German IP, which I need sometimes. I have complete access to both computers, but I just don't know where to start (my knowledge is limited to software development). I'd like to know where to begin, whether I need to create a VPN, and so on. Thanks in advance!
    Update 1: I tried a lot of OpenVPN options, but sadly I know nothing about networking, so I have to accept that I do not know what I'm doing. Here are my config files (note that most of the options come from the sample config files).
    server.conf:
      port 1194
      proto udp
      dev tun
      server 10.0.0.0 255.255.255.224   # you may choose any subnet; 10.0.0.x is used for this example
      ca "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\ca.crt"
      cert "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\server.crt"
      key "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\server.key"
      dh "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\dh1024.pem"
      push "redirect-gateway def1"
      push "dhcp-option DNS 8.8.8.8"
      # the following commands are optional
      keepalive 10 120
      comp-lzo
      persist-key
      persist-tun
      verb 5
    client.conf:
      client
      dev tun
      proto udp
      remote 176.9.99.180 1194
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\ca.crt"
      cert "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\client1.crt"
      key "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\client1.key"
      ns-cert-type server
      comp-lzo
      verb 5
      explicit-exit-notify 2
      ping 10
      ping-restart 60
      route-method exe
      route-delay 2
    And here are the server's network settings: IP address 176.9.99.180, subnet mask 255.255.255.224, default gateway 176.9.99.161, preferred DNS server 127.0.0.1.

    Read the article

  • Dropped Dell XPS has hard-drive trouble

    - by Alex B.
    Yesterday my mom dropped her laptop on the floor, and now she gets the blue screen of death when trying to boot the system. I was able to start a Fedora live CD and get some of her stuff off the hard drive, but I cannot seem to install Windows on it: the installation starts loading files and then the computer turns off. I am thinking that she might need a new hard drive. Any ideas? Edit: I actually was able to boot the Windows XP installer, but it says that no hard disk is detected. How can this be possible if I was able to mount the drive in Fedora yesterday?

    Read the article

  • ionCube does not load after upgrading to PHP 5.4

    - by amir
    I am afraid that I broke something in my VPS; I hope you can help me. I am on Ubuntu 12.04 x86, and since I moved to a new VPS I tried to upgrade PHP from 5.3 to the new 5.4 version. After installing I get this message: Failed loading /usr/lib/php5/20090626+lfs/ioncube_loader_lin_5.3.so: /usr/php5/20090625+lfs/ioncube_loader_lin_5.3.so: undefined symbol: php_body_wri PHP 5.4.8-1~precise+1 (cli) (built: Oct 29 2012). I should mention that the server is working and PHP is also working, but when I run phpinfo() there is no "with the ionCube PHP Loader v4.0.14, Copyright (c) 2002-2011, by ionCube Ltd." line, which was there before. I installed following this guide: http://www.upubuntu.com/2012/03/how-to-upgrade-install-php-540-under.html Do I need to worry?
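
    The error itself is expected: each ionCube loader binary is built against one PHP version, so the _lin_5.3 loader cannot be loaded by PHP 5.4 and the 5.4 build has to be downloaded from ionCube and referenced instead. A hedged sketch of the ini entry (the unpack directory and ini filename are placeholders for this setup):

      ; e.g. /etc/php5/conf.d/ioncube.ini (plus the matching ini for whichever SAPI is in use)
      ; placeholder path - point at wherever ioncube_loader_lin_5.4.so was actually unpacked
      zend_extension = /usr/local/ioncube/ioncube_loader_lin_5.4.so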

    Read the article

  • Minimal Fedora Installation

    - by MA1
    I am working on a Fedora 13 minimal ISO image with no desktop environment; I just need wxPython support. I removed GNOME from the kickstart and built the ISO using livecd-creator, but now my application does not start because GNOME is gone. Previously my application (myapp.desktop) was placed in /usr/share/gnome/autostart/ and started automatically. So what should I do now to run my application? To run my wxPython application, do I have to install a display manager (xdm, gdm, kdm, etc.)? If I install gdm, it takes a lot of space. What should I do? In short, I need a Fedora 13 minimal ISO image with no desktop environment but with wxPython support.
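
    A display manager is not strictly required just to run one GUI application: as long as the X server, xinit and the wxPython packages are in the kickstart, a bare X session can be started whose only client is the app itself. A minimal sketch (assuming /usr/bin/myapp is a placeholder for the real launcher):

      # ~/.xinitrc - the application is the only X client; X exits when it exits
      exec /usr/bin/myapp

      # then, from a console login or an init/autologin script:
      startx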

    Read the article

  • Ubuntu GRUB problem

    - by pravinp
    Hi, I have (had) Windows XP on one partition of my HDD. On the second partition I installed Ubuntu 9.10 yesterday, and after a reboot into Windows XP I get the error "GRUB LOADING.". Now, I know that you can play around with that using a live CD and GRUB, putting GRUB back on the MBR, etc. What I want to know is: if I reinstall with Ubuntu 10.04, which is available now, will it still give me the same error, or will I be able to use both OSes? (I.e., is that problem solved in version 10.04, and if it is, will reinstalling Ubuntu solve the problem or not?) Any help is welcome. Thanks
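
    Whether or not 10.04 behaves differently, the boot loader can usually be repaired from an Ubuntu live CD without reinstalling anything; a hedged sketch (the partition /dev/sda5 and the disk /dev/sda are placeholders to check against the output of fdisk -l):

      sudo fdisk -l                                        # identify the disk and the Ubuntu root partition
      sudo mount /dev/sda5 /mnt                            # placeholder: the partition Ubuntu was installed to
      sudo grub-install --root-directory=/mnt /dev/sda     # reinstall GRUB 2 to the MBR of the whole disk
      sudo reboot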

    Read the article

  • How do I recover files from a corrupt VDI file?

    - by Eric P
    Is it possible to repair a corrupt VDI file? The OS on the VDI (XP) doesn't boot at all; it just hangs at a black screen. I was getting file errors on its last boot, but now it's not working at all. A sector viewer shows 'Invalid partition table / Error loading operating system / Missing operating system'. I tried mounting the file from the host OS, but it just says that the drive isn't formatted. I don't need to be able to run the VDI, but I do need some files that are on it. Is there any way to recover files from the corrupt VDI file?
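
    One approach that avoids booting the guest at all is to convert the VDI to a raw disk image and point ordinary recovery tools at it; a hedged sketch, assuming qemu-img and TestDisk/PhotoRec are installed and disk.vdi / disk.raw are placeholder filenames:

      qemu-img convert -f vdi -O raw disk.vdi disk.raw   # flat raw copy of the virtual disk
      testdisk disk.raw                                  # try to rebuild the partition table and copy files out
      photorec disk.raw                                  # or carve individual files if the filesystem is too damaged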

    Read the article

  • How do I set up a local DNS server on Mac OS Lion?

    - by Peter Kovacs
    I had some serious lag resolving website addresses, and sometimes things simply wouldn't load (pages kept loading for 5+ minutes without even a timeout error), so I had set up a local DNS server/cache using BIND on Leopard and Snow Leopard. Now that I have Lion, I have the same problem, but the old instructions no longer apply and I can't find a way to do it. Has anyone attempted this? Are there viable alternatives for DNS servers on OS X 10.7? For those who are wondering, I have already tried several external DNS servers; only my computer has this issue on the network.
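
    If resurrecting BIND on Lion proves painful, a lighter alternative is a local caching resolver such as dnsmasq (installable via Homebrew or MacPorts), with 127.0.0.1 then set as the DNS server in System Preferences > Network. A minimal cache-only sketch, with 8.8.8.8 used only as a placeholder upstream:

      # dnsmasq.conf - cache-only resolver listening on localhost
      listen-address=127.0.0.1
      no-resolv             # ignore /etc/resolv.conf; use only the servers listed here
      server=8.8.8.8        # placeholder upstream resolver
      cache-size=1000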

    Read the article

  • modprobe ndiswrapper - not found

    - by David
    I'm trying to use ndiswrapper on a Slackware 12 (I think) box, but I'm running into a problem with modprobe. Everything I find online says that it should be working, but for some unknown reason it isn't. Here's what I've done so far:
    - Installed ndiswrapper (latest tarball, make, make install)
    - Ran ndiswrapper -i on the WinXP driver for my USB wireless card
    - Ran ndiswrapper -l which tells me the driver is present and the device is present (lsusb also confirms the device is present)
    - Ran ndiswrapper -m which put an alias for wlan0 in /etc/modprobe.d/ndiswrapper.conf
    - Ran depmod -a
    - Ran modprobe ndiswrapper which tells me "FATAL: module ndiswrapper not found"
    - Ran modprobe -l which shows no listing for ndiswrapper
    I even tossed in a reboot or two while trying various combinations of the above, still nothing. So naturally ifconfig wlan0 up isn't working because the device isn't being created, presumably because the module isn't loading the driver. Does anybody have any suggestions? Everything points to the notion that this should work fine, but modprobe just isn't able to find what it needs. Have I missed an important step?
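
    A common cause of "FATAL: module not found" right after make install is the module ending up under a different kernel tree than the one actually running; a hedged checking sketch (the misc/ path is ndiswrapper's usual install target, not confirmed for this box):

      uname -r                                     # kernel that is actually running
      find /lib/modules -name 'ndiswrapper.ko*'    # where make install really put the module
      ls /lib/modules/$(uname -r)/misc/            # the directory the ndiswrapper Makefile usually targets
      depmod -a $(uname -r)                        # rebuild module dependencies for the running kernel
      modprobe ndiswrapper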

    Read the article

  • Cisco access-list confusion

    - by LonelyLonelyNetworkN00b
    I'm having trouble implementing access lists on my ASA 5510 (8.2) in a way that makes sense to me. I have one access list for every interface on the device, applied to the interface via the access-group command. Let's say I have these access groups: access-group WAN_access_in in interface WAN; access-group INTERNAL_access_in in interface INTERNAL; access-group Production_access_in in interface PRODUCTION. WAN has security level 0, INTERNAL security level 100, and PRODUCTION security level 50. What I want is an easy way to poke holes from Production to Internal. That seems pretty easy, but then the whole notion of security levels no longer seems to matter: I can't exit out the WAN interface unless I add an ANY ANY entry, which in turn opens access completely to the INTERNAL net. I could solve this by issuing explicit DENY ACEs for my internal net, but that sounds like quite a hassle. How is this done in practice? In iptables I would use logic something like: if source equals production-subnet and outgoing interface equals WAN, ACCEPT.
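
    On the ASA, once an ACL is applied to an interface it does take over from the security-level behaviour for that direction, and the usual pattern is exactly the ordering described above, kept short by matching whole subnets: permit the specific holes, deny the rest of the internal network, then permit everything else out. A hedged sketch with placeholder subnets (10.50.0.0/16 as Production, 10.100.0.0/16 as Internal, a single HTTPS server poked through):

      access-list Production_access_in extended permit tcp 10.50.0.0 255.255.0.0 host 10.100.0.10 eq 443
      access-list Production_access_in extended deny ip 10.50.0.0 255.255.0.0 10.100.0.0 255.255.0.0
      access-list Production_access_in extended permit ip 10.50.0.0 255.255.0.0 any
      access-group Production_access_in in interface PRODUCTION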

    Read the article

  • Impersonation on IIS 7.0 passes the machine credentials for Crystal Reports

    - by pknox
    On a 32-bit Windows 2008 server running the Donor2 Application in the Classic .NET Managed Pipeline mode, configured for Windows Integrated Authentication and Impersonation, all of the .NET pages are passing the authenticated user’s credentials [DomainName\UserName]. This is the correct, expected behavior. The Crystal Reports pages, instead of passing the authenticated user’s credentials, are passing the IIS Server’s credentials [DomainName\MachineName$]. One of the very frustrating aspects of this situation is that I have another server which, as far as I can tell, is configured identically. That server, when loading Crystal Reports, is passing the authenticated user’s credentials [DomainName\UserName] as expected. I have obviously missed something, but I have no idea what it could be.

    Read the article

  • CentOS - Yum doesn't update anymore?

    - by Xanathos
    I've been trying to use yum, but for some reason not even search works anymore; I even tried putting packages I had already downloaded in the search criteria, and it's the same. For example:
      [root@AMDFX03 Downloads]# yum search glibc
      Loaded plugins: fastestmirror, refresh-packagekit, security
      Loading mirror speeds from cached hostfile
      epel/metalink                          | 22 kB 00:00
       * base: centos.secrel.com.br
       * epel: archive.linux.duke.edu
       * extras: centos.secrel.com.br
       * rpmforge: apt.sw.be
       * updates: centos.secrel.com.br
      adobe-linux-x86_64/primary             | 1.2 kB 00:00
      http://linuxdownload.adobe.com/linux/x86_64/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
      Trying other mirror.
      Error: failure: repodata/primary.xml.gz from adobe-linux-x86_64: [Errno 256] No more mirrors to try.
    This error always appears no matter what I do. Please, can you tell me how to fix this, or at least how to reset yum's configuration?
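
    The repository that is failing is the Adobe one (its primary.xml.gz no longer matches the cached checksum), not yum as a whole, so the usual first steps are to flush the metadata cache and then confirm everything else works with that repo skipped; a hedged sketch (the .repo filename is the conventional one and may differ on this box):

      yum clean metadata                                   # drop cached repo metadata (or: yum clean all)
      yum --disablerepo=adobe-linux-x86_64 search glibc    # check that the other repos behave with Adobe skipped
      # if they do, disable the broken repo until its metadata is fixed upstream:
      sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/adobe-linux-x86_64.repo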

    Read the article

  • Load multiple commands to execute in AutoCAD command

    - by JNL
    I am trying to load the .arx file for a particular program once the user clicks on a customized tab in AutoCAD. The requirements are: 1. when the user clicks on the customized tab, it loads the .arx; 2. it runs the program; 3. it unloads the .arx file. I achieved point 2 with acedRegCmds-(); I tried getting points 1 and 3 working by using acedArxLoad(_T("C:\Example\Example.arx")); but it did not work. Any help with this would be highly appreciated. Also, just curious: should loading multiple commands in LISP work for this?

    Read the article

  • Trying to install SawMill and getting the following error:

    - by Itai Ganot
    [root@sawmill sawmill]# ./sawmill
      ./sawmill: error while loading shared libraries: libldap-2.3.so.0: cannot open shared object file: No such file or directory
    Using yum provides libldap_r-2.3.so.0, I found that the package which includes this file is compat-openldap-2.3.43-2.el6.i686. After installing it I still get the error. If I use locate, I can find the file in /usr/lib, so I tried to create a symbolic link to the file from /usr/lib to /usr/lib64, but I still get the same error. I also tried setting LD_LIBRARY_PATH=/usr/lib/ and LD_LIBRARY_PATH=/usr/lib64, but it doesn't allow me to run the Sawmill installation script. Does anyone know how to solve this issue?
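
    Note that the binary is asking for libldap-2.3.so.0, while the yum query above searched for libldap_r-2.3.so.0 (the reentrant variant), so it is worth checking exactly which 32-bit libraries are still unresolved and making sure LD_LIBRARY_PATH is exported in the shell that launches the program; a hedged sketch:

      ldd ./sawmill | grep 'not found'       # list exactly which shared libraries are still missing
      yum provides '*/libldap-2.3.so.0'      # note: libldap, not libldap_r
      export LD_LIBRARY_PATH=/usr/lib        # must be exported, in the same shell that runs the binary
      ./sawmill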

    Read the article

  • Share one SSL certificate between multiple vhosts

    - by Cesar
    I have a setup like this:
      <VirtualHost 192.168.1.104:80>
          ServerName domain1
          DocumentRoot /home/domain/public_html
          ...
      </VirtualHost>
      <VirtualHost 192.168.1.104:80>
          ServerName domain2
          DocumentRoot /home/domain2/public_html
          ...
      </VirtualHost>
      <VirtualHost 192.168.1.104:80>
          DocumentRoot /home/domain3/public_html
          ServerName domain3
          ...
      </VirtualHost>
      <VirtualHost 192.168.1.104:443>
          ServerName domain3
          SSLCertificateFile /usr/share/ssl/certs/certificate.crt
          SSLCertificateKeyFile /usr/share/ssl/private/private.key
          SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle
          ...
      </VirtualHost>
    I want to use domain3's certificate for the other domains, preferably without having to repeat the whole <VirtualHost 192.168.1.104:443> config for each one. In other words, I want something like this: if a vhost has no explicit SSL config, use the cert for domain3 (/usr/share/ssl/certs/certificate.crt). Notes: 1. I will for sure be setting up more vhosts in the future. 2. I know about (and don't care about) the SSL warnings the browser will show (hostname mismatch). Is this possible? How?
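
    Apache has no per-vhost "fall back to another vhost's SSL config" mechanism, but the repetition can be kept to a minimum by putting the shared SSL directives in one file and pulling it into each :443 vhost with Include; a hedged sketch (the include-file path is a placeholder):

      # /etc/httpd/conf/ssl-shared.conf  (placeholder path)
      SSLEngine on
      SSLCertificateFile /usr/share/ssl/certs/certificate.crt
      SSLCertificateKeyFile /usr/share/ssl/private/private.key
      SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle

      # each additional https vhost then stays short:
      <VirtualHost 192.168.1.104:443>
          ServerName domain1
          DocumentRoot /home/domain/public_html
          Include /etc/httpd/conf/ssl-shared.conf
      </VirtualHost>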

    Read the article

  • Generating a cache file for a Twitter RSS feed

    - by Kerri
    I'm working on a site with a simple PHP-generated Twitter box showing user-timeline tweets pulled from the user_timeline RSS feed, cached to a local file to cut down on loads and as a backup for when Twitter goes down. I based the caching on this: http://snipplr.com/view/8156/twitter-cache/. It all seemed to be working well yesterday, but today I discovered the cache file was blank; deleting it and then loading the page again generated a fresh file. The code I'm using is below. I've edited it to try to get it to work with what I was already using to display the feed, and probably messed something crucial up. The changes I made are the following (and I strongly believe that any of these could be the cause):
    - Revised the time-difference code (the linked example seemed to use a custom function that wasn't included in the code).
    - Removed the "serialize" function from the "fwrites", purely because I couldn't figure out how to unserialize once I loaded the data in the display code. I truthfully don't understand the role that serialize plays or how it works, so I'm sure I should have kept it in. If that's the case, I just need to understand where/how to unserialize in the second part of the code so that it can be parsed.
    - Removed the $rss variable in favor of just loading the cache file in my original tweet-display code.
    So, here are the relevant parts of the code I used:
      <?php
      $feedURL = "http://twitter.com/statuses/user_timeline/#######.rss";

      // START CACHING
      $cache_file = dirname(__FILE__).'/cache/twitter_cache.rss';

      // Start with the cache
      if(file_exists($cache_file)){
          $mtime = (strtotime("now") - filemtime($cache_file));
          if($mtime > 600) {
              $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
              $cache_static = fopen($cache_file, 'wb');
              fwrite($cache_static, $cache_rss);
              fclose($cache_static);
          }
          echo "<!-- twitter cache generated ".date('Y-m-d h:i:s', filemtime($cache_file))." -->";
      } else {
          $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/#######.rss');
          $cache_static = fopen($cache_file, 'wb');
          fwrite($cache_static, $cache_rss);
          fclose($cache_static);
      }
      //END CACHING

      //START DISPLAY
      $doc = new DOMDocument();
      $doc->load($cache_file);
      $arrFeeds = array();
      foreach ($doc->getElementsByTagName('item') as $node) {
          $itemRSS = array (
              'title' => $node->getElementsByTagName('title')->item(0)->nodeValue,
              'date' => $node->getElementsByTagName('pubDate')->item(0)->nodeValue
          );
          array_push($arrFeeds, $itemRSS);
      }
      // the rest of the formatting and display code....
      }
      ?>
    ETA 6/17: Nobody can help…? I'm thinking it has something to do with a blank cache file being written over a good one when Twitter is down, because otherwise I imagine this should be happening every ten minutes when the cache file is overwritten again, but it doesn't happen that frequently. I made the following change to the part where it checks how old the file is before overwriting it:
      $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
      if($mtime > 600 && $cache_rss != ''){
          $cache_static = fopen($cache_file, 'wb');
          fwrite($cache_static, $cache_rss);
          fclose($cache_static);
      }
    …so now it will only write the file if it's over ten minutes old and there's actual content retrieved from the RSS page. Do you think this will work?

    Read the article
