Search Results

Search found 22308 results on 893 pages for 'floating point'.

  • How do I get to the bottom of network latency and bandwidth issues

    - by three_cups_of_java
    I recently moved two blocks south. That move took me from Comcast to Broadstripe (both high-speed cable internet providers). Comcast was pretty good. Broadstripe sucks. I called them on the phone, and they basically brushed me off (politely). I want to come to them with some numbers, so I can say more than just "it's really slow". I still have access to my old Comcast service, so I can run the tests using both providers. Here's what I'm seeing with my new Broadstripe service:

    1. When I browse to most sites, there is a long delay (5-10 seconds) before the page starts loading in my browser.
    2. The speed test tells me I have 12 megs down (bullshit).
    3. I have a server at my office. I just downloaded some files from it (using scp on the command line), and it reported 3.5 KB/s.

    I'm an experienced programmer and spend most of my days on the command line and in vim. Networking, however, is not a strong point. I've played around with traceroute, but I'm not sure if that's the right tool to use. I have access to servers all over the country (I would just use Amazon EC2 to set up a test server), and I prefer to use Ubuntu for my testing. How can I come up with some hard numbers to show Broadstripe how crappy their service is?
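
    A hedged starting point for gathering those numbers: run the same script once per provider and compare the two result files. The target hostname and the iperf3 server address are placeholders; mtr and iperf3 are assumed to be installed, with iperf3 -s running on a server you control (such as an EC2 instance).

        #!/usr/bin/env bash
        # Compare providers: run once on Comcast, once on Broadstripe.
        TARGET=example.com            # placeholder: a site that feels slow
        IPERF_SERVER=203.0.113.10     # placeholder: your EC2/office test server
        OUT="results-$(date +%F-%H%M).txt"

        {
          echo "== Latency and packet loss (100 pings) =="
          ping -c 100 -q "$TARGET"

          echo "== Per-hop latency/loss (mtr, 100 cycles) =="
          mtr --report --report-cycles 100 "$TARGET"

          echo "== DNS lookup time =="
          dig "$TARGET" | grep "Query time"

          echo "== TCP throughput (iperf3) =="
          iperf3 -c "$IPERF_SERVER" -t 30        # upstream (client sends)
          iperf3 -c "$IPERF_SERVER" -t 30 -R     # downstream (server sends)
        } | tee "$OUT"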

  • RTL8188CE doesn't connect to any wifi access points

    - by Drakmail
    I'm using NetworkManager to connect. I also tried iwconfig; the results are the same. I even tried to connect to an open access point — same result. More information:

        Drakmail@thinkpad-x220:~$ lspci | grep Network | grep -v Ethernet
        03:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01)
        Drakmail@thinkpad-x220:~$ uname -a
        Linux thinkpad-x220 3.1.0 #1 SMP PREEMPT Wed Oct 26 02:19:49 UTC 2011 x86_64 Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz GenuineIntel GNU/Linux
        Drakmail@thinkpad-x220:~$ dmesg | tail -n 10
        [  846.901574] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
        [  906.812461] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
        [  966.728810] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
        [ 1026.639676] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
        [ 1030.925574] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin

    At this moment I try to connect to the open wifi AP:

        [ 1031.252403] wlan0: direct probe to 00:24:8c:55:fa:ed (try 1/3)
        [ 1031.451943] wlan0: direct probe to 00:24:8c:55:fa:ed (try 2/3)
        [ 1031.651658] wlan0: direct probe to 00:24:8c:55:fa:ed (try 3/3)
        [ 1031.851354] wlan0: direct probe to 00:24:8c:55:fa:ed timed out
        [ 1086.544960] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin

    My distribution:

        Drakmail@thinkpad-x220:~$ cat /etc/*version
        AgiliaLinux release 8.0.0 (Sammy)

    (Something between Slackware and Arch Linux.) I also noticed that the wifi module tries to load the firmware file very often. Any ideas what it could be?
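
    Two low-risk checks worth trying, sketched below: confirm the radio isn't soft- or hard-blocked, and reload the driver with its power-saving options turned off. The ips/fwlps/swenc parameter names are an assumption for the rtl8192ce module and vary by kernel version, so check them with modinfo first.

        # 1. Confirm the radio isn't blocked in software or by the hardware switch
        rfkill list

        # 2. Reload the driver with power saving disabled and software crypto enabled
        #    (assumed parameter names; verify with `modinfo rtl8192ce`)
        sudo modprobe -r rtl8192ce
        sudo modprobe rtl8192ce ips=0 fwlps=0 swenc=1
        dmesg | tail    # watch whether the "direct probe ... timed out" lines go away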

  • Windows Server 2008 R2 creating a multi-year client certificate using the IIS certsrv page while deploying SSTP VPN

    - by Warren P
    I am trying to follow instructions on TechNet about deploying a Standard (non-enterprise) SSTP-based VPN that were originally written for Server 2008, but I am using Server 2008 R2. I have gotten as far as the part where it asks you to request a Server Authentication certificate. I have deployed IIS and Active Directory Certificate Services, and chose a "Standalone", "Standard" (non-enterprise) Certificate Authority because I don't have an OID and don't think I should have to get one for a simple deployment of SSTP. The certificates produced by the Certification Authority's "Issue" command only have a one-year period of validity, and I want a multi-year certificate. At no point in this process is there any way to input this information unless it's through the Attributes text input area on the Advanced Certificate Request page, which appears to be generated using an old ActiveX control, which means I can only do this using the workarounds in the article that I linked at the top, and only using Internet Explorer.

    Update: It may be that this question is pointless, since self-signed keys do not appear to work when I try them using Windows 8 as the VPN client. The problem is that the keys self-created by the technique shown here do not have any Certificate Revocation Server URLs, so you get the error "The revocation function was unable to check revocation", and the VPN connection fails.

  • Are there any other causes of this error that are NOT related to initial setup?

    - by LordScree
    I'm trying to diagnose an issue at a customer site. They are receiving the following error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server

    I've seen this a few times, but only during the initial setup - it's often caused by one of the following:

    - The database server is turned off
    - The network connection between the database server and the application is closed or somehow blocked (e.g. a firewall)
    - The SQL Server instance is not set up to receive remote connections from the application server (e.g. TCP is turned off, remote connections are disabled, or the "SQL Server Browser" service is stopped/disabled)

    However, if I assume that no configuration changes have been made, I'm trying to postulate on what the reason might be for getting this error at a random point after the initial setup. My initial thought is that the SQL Server machine has run out of resources (e.g. RAM) and is unable to accept new requests from the application server. Is this a valid theory? What other possible causes are there of this error that are not related to the initial setup of the server / application connection? Or is it simply impossible that this error could occur without a configuration change having been made (either on the SQL Server side, the application side, or somewhere in between on the network)?

    NOTE: I believe this question differs from the plethora of questions related to this error message because the application and server have been talking to each other quite happily until now (most, if not all, other questions seem to relate to initial setup).

  • How to fix a damaged/corrupted NTFS filesystem/partition without losing the data on it?

    - by Gareth
    I was going to install Fedora 15 alongside my Windows 7 Starter on my Acer Aspire One D255E, and at some point during the resizing of the NTFS partition (the one with Windows on it) the setup failed. Now I cannot access this partition from any OS. When I try to access it from a Fedora install running on a USB flash drive I get this error:

        Error mounting: mount exited with exit code 12: Failed to read last sector (452534271): Invalid argument
        HINTS: Either the volume is a RAID/LDM but it wasn't setup yet,
        or it was not setup correctly (e.g. by not using mdadm --build ...),
        or a wrong device is tried to be mounted,
        or the partition table is corrupt (partition is smaller than NTFS),
        or the NTFS boot sector is corrupt (NTFS size is not valid).
        Failed to mount '/dev/sda5': Invalid argument
        The device '/dev/sda5' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    It doesn't make a lot of sense to me, but I was really hoping it would to someone who can give me a way to restore the partition without losing everything on it (I have a lot of important notes from various classes on there). Cheers.
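
    The error suggests the partition table now describes a partition smaller than the NTFS filesystem inside it, which is a common result of an interrupted resize. A cautious sequence, sketched under the assumption that the partition really is /dev/sda5 and that there is somewhere to store an image first:

        # 0. Image the damaged partition before touching it (needs free space elsewhere)
        sudo ddrescue /dev/sda5 /mnt/backup/sda5.img /mnt/backup/sda5.log

        # 1. Let TestDisk re-examine the disk and rewrite a partition entry that matches
        #    the NTFS boot sector (Analyse -> Quick/Deeper Search -> Write)
        sudo testdisk /dev/sda

        # 2. If the boundary is fixed but NTFS still won't mount, try a minimal repair
        sudo ntfsfix /dev/sda5

        # 3. Mount read-only and copy the important files off before anything else
        sudo mount -o ro /dev/sda5 /mnt/windows

    If the data matters, copying it from the read-only mount (or from the image) before letting Windows chkdsk loose is the safer order of operations.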

  • OpenVZ container is running but does not show in vzlist nor can I find the private/conf files for the container

    - by Kakeakeai
    I was creating a new OpenVZ container on one of our VPS nodes when the power went out for that machine. After bringing the machine back online I could no longer access the container CTID=101. I cannot destroy it using "vzctl destroy 101", I cannot enter or control it, and "vzlist -a" does NOT display any containers at all (this was a fresh node and the first container was being created). I decided to create a new container at this point, assuming that the old container just was not saved for some reason. However, when I go to add the IP/host to the new container I get a warning that the IP is already in use. After doing a ping to the IP I realized there was a machine on that IP. I SSH into the machine and discover it is the OLD container, which somehow is orphaned. I cannot find it on the filesystem, I cannot find it using VZ commands, and it is set to start on node boot, so it is impossible to shut down (even SSHing in and typing the "shutdown now" command just reboots the container rather than shutting it down). Is this a flaw in OpenVZ or am I missing something? I have all the outputs and logs if needed. Thank you all so much in advance.
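
    A few places to look for the half-created container on the host, sketched below. The /vz and /etc/vz/conf paths are the OpenVZ defaults and may differ on this node (check VE_PRIVATE and the config directory in /etc/vz/vz.conf); the sample config name in the comment is an assumption.

        # The kernel's view of running containers, independent of vzlist's config files
        cat /proc/vz/veinfo

        # Default locations for the container's private area and config
        ls -ld /vz/private/101 /vz/root/101
        ls -l /etc/vz/conf/ | grep 101

        # If veinfo shows VEID 101 running but 101.conf is missing, restoring a minimal
        # config usually lets vzctl talk to it again, e.g.
        #   cp /etc/vz/conf/ve-basic.conf-sample /etc/vz/conf/101.conf   (sample name varies)
        vzctl stop 101
        vzctl destroy 101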

  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have apache2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions of /Users/Me/Documents are 700, so _www can't get in, even if abc is chmod 777. I recognize the following options:

    - Allow _www access to my Documents folder.
    - Put the files I want to share outside of my Documents folder.
    - Hard-link the files outside of my Documents folder, and point apache to the hard links.

    None of these solutions is acceptable to me, however:

    - I don't feel safe allowing _www access to my entire Documents folder.
    - I really want to keep the files in my Documents folder for other reasons.
    - The files are changing all the time, so hard-linking would not always reflect the right file structure, and, as I understand it, you can't hard-link a directory (though, if you could, that would solve it).

    Any ideas for a solution? Is there a way to run a few httpd processes as my user account so they can get in there? Or is there some way to hard-link a directory, or some way to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!
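
    One possible middle ground, sketched with macOS ACL syntax: grant _www only the "search" (traverse) right on Documents, so it can pass through the directory without being able to list or read anything else in it. The path and username are taken from the question; treat this as a sketch rather than a vetted security setup.

        # Let _www traverse Documents without granting read/list access
        chmod +a "_www allow search" /Users/Me/Documents

        # Verify: the new ACL entry should appear in the listing
        ls -led /Users/Me/Documents

        # To undo it later
        chmod -a "_www allow search" /Users/Me/Documents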

  • How do I set up an Alias on Apache with XAMPP on Linux? (Permission problem)

    - by knarf
    XAMPP works fine, but I want http://localhost/f to point to /home/knarf/prog/php/fwyxz. I've run chmod -R 777 /home/knarf/prog/php/fwyxz and added Alias /f /home/knarf/prog/php/fwyxz at the end of httpd.conf. When I try to access it, I get a 403. From the Apache error_log:

        [error] [client 127.0.0.1] (13)Permission denied: access to /f denied.

    I've already tried several solutions (userdir and symlinks) but they both failed with the same error. I've also tried to add this after the Alias:

        <Directory "/home/knarf/prog/php/fwyxz">
            Order allow,deny
            Allow from all
        </Directory>

    But again, permission denied. Now if I change the user/group under which Apache runs from nobody to knarf, it seems to work (static files are OK) but PHP can't use/initialize sessions:

        [error] [client 127.0.0.1] PHP Warning: session_start() [function.session-start]: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in /home/knarf/prog/php/fwyxz/index.php on line 3
        [error] [client 127.0.0.1] PHP Warning: Unknown: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [error] [client 127.0.0.1] PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct () in Unknown on line 0

    This is really frustrating.
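
    Two things seem worth checking here, sketched below: the nobody user needs execute (traverse) permission on every directory above fwyxz, not just on fwyxz itself, and the session errors after switching Apache to knarf look like stale /tmp/sess_* files still owned by the old user. Paths are taken from the question.

        # 1. Give Apache traverse rights on every parent directory (x only, no read needed)
        chmod o+x /home/knarf /home/knarf/prog /home/knarf/prog/php

        # 2. If Apache now runs as knarf, clear session files created earlier by nobody
        #    (or point session.save_path at a directory owned by knarf instead)
        sudo rm -f /tmp/sess_*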

  • Unable to mount iso as dvd drive in VMware Fusion 6

    - by John O
    I have a newer iMac without an optical drive and some DVDs that I needed to run software off of. This software has you juggle discs to read data off of them, and the data can't simply be copied to the machine's drive. I used a Windows machine to make ISOs of these DVDs. The first disc, the installer, will mount in VMware and let you install the software. It then asks for the other discs, and these won't mount as ISOs. If I mount them as drive images, they show up on the iMac and I have access to all the files. But if I try to mount them as the DVD drive through Fusion, nothing happens. For that matter, I'm unable to attempt it a second time, as Fusion believes that there is already an ISO mounted as the DVD, while nothing has happened as far as the guest OS is concerned. drutil eject will allow me to eject the ghost/non-existent DVD, at which point I can make a second (equally futile) attempt. Does anyone have an explanation for this behavior? How can the ISO be valid enough to mount as a drive image, but not valid enough for Fusion to mount it as if it were a DVD?
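
    One thing that sometimes helps in this situation, offered purely as a guess: since OS X will mount the images, re-master each problem ISO from its mounted contents with hdiutil so Fusion gets a freshly built ISO9660/Joliet image rather than the one produced on the Windows machine. Filenames and the volume name below are placeholders.

        # Mount the original image (appears under /Volumes/<volume name>)
        hdiutil attach disc2.iso

        # Rebuild a plain ISO9660+Joliet image from the mounted contents
        hdiutil makehybrid -iso -joliet -o disc2-rebuilt.iso "/Volumes/DISC2"

        hdiutil detach "/Volumes/DISC2"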

  • Using Windows as a gateway to the internet

    - by James Wright
    My customer currently blocks outbound RDP and SSH, which means that none of their employees can get access to external Windows and Linux boxes (at the console level). However, a need has recently arisen to give access to an assortment of RDP and SSH endpoints scattered throughout the internet. The endpoint IP addresses are a moving target, and an access list exists to define what those IP addresses are. So now my customer wants to have a single Windows server that they control as the sole outbound point for RDP/SSH to the internet. Consider it a jump box to the internet. If one of our admins has access to this Windows box, they can log on and from there bounce around to RDP/SSH endpoints on the internet. Is a standard Windows 2008 box going to work as a jump box? For example, I seem to recall that Win2k8 limits the number of users that can log on simultaneously, which means that the jump box may not be accessible if lots of users are on it. Advice as to how to make this work?

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I can get around 1Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.). It can be painful to wait sometimes. Plus I do some side contract game development, and it can be very difficult to playtest with other developers (200ms ping is a good day for me). Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26Mb down, 10Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on our server, and use it as a proxy to route to my home computer, I could stand to gain some major speed. I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had. But the proxy server is a quad-core Xeon, and the receiving computer is a pretty decent i7, so that shouldn't be a concern. I found http://toonel.net/ but it seems more geared toward very slow narrowband users, like dial-up. Plus, I would prefer to just be able to point my browser to a proxy server, rather than install software on my client machine.

    EDIT: I thought about my question a little more, and realize I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.
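
    Given that some client-side software is acceptable, the simplest sketch of this idea is an SSH tunnel with compression from home to the work server, used as a SOCKS proxy by the browser. This assumes SSH access to the work machine is allowed; ssh -C uses zlib, which helps text-like traffic (HTML, JSON, source code) and does little for already-compressed video.

        # From the home machine: compressed SOCKS proxy through the work server
        # (hostname is a placeholder)
        ssh -C -N -D 1080 user@work-server.example.com

        # Then point the browser's SOCKS5 proxy setting at localhost:1080.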

  • Updating files with a Perforce trigger before submit [migrated]

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me.

    Background: in my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process.

    My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist will be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that the change-content trigger (whether possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them or run sed to replace the placeholder value with the intended value? The online documentation for Perforce which I have found so far has not been very explicit on whether this is possible or how the mechanics of a submission at this stage would work.
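
    For the read side of this, a change-content trigger can fetch the in-flight file content with the @=change revision specifier; the sketch below shows only that part, with a placeholder depot path and server address, and does not claim the content can be rewritten from inside the trigger. Perforce's built-in keyword expansion (the $Change$ keyword on files with a +k filetype) may be an easier route to getting the changelist number into the docs at submit time.

        #!/bin/sh
        # doc_version.sh - called from a change-content trigger; prints a doc file as it
        # exists in the pending changelist. Hypothetical triggers-table entry:
        #   doc-version change-content //depot/docs/... "/p4/triggers/doc_version.sh %changelist%"
        CHANGE="$1"
        p4 -p perforce:1666 -u trigger-user print -q "//depot/docs/version.txt@=$CHANGE"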

  • AT&T Filtering FTP traffic?

    - by xpda
    Using an AT&T DSL, I cannot ftp upload or ftp download a few files of a large 1500 set. The problem is the file name. I can change a few characters of the file name, and they upload fine. I can change the file names from upper to lower case and they upload fine. If I change back to the original file name, it will not upload again. When it doesn't upload, it starts, transfers about 5% of a 5-10 meg file, and then times out. I have uploaded one of the files under a different name, changed the name back to the original, and it will not download via ftp. It will download onto a browser, and it will ftp download just fine with a different name. It just will not download with ftp. I have reproduced this uploading to three different servers on 1and1 and Amazon EC2. When I try it on a non-AT&T ISP client, it works OK. Here is a file that did not upload until I had renamed it. (I have changed it back to the original name): "http://xpda.com/nautnew/11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" This problem is unrelated to connection, speed, and file content. Only things I can see that makes a difference are the file name and ATT DSL. Does ATT have some kind of ftp file filtering? Is there anything else that could cause this behavior?

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "Drop Folder" in a windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems like all methods I've found to do this, use the last modified date, last access time, or creation date of a file. I'm trying to make this a folder that a user can drop files in to share with somebody. If someone copies or moves files into here, I'd like the clock to start ticking at this point. However, the last modified date and creation date of a file will not be updated unless someone actually modifies the file. The last access time is updated too frequently... it seems that just opening a directory in windows explorer will update the last access time. Anyone know of a solution to this? I'm thinking that cataloging the hash of files on a daily basis and then expiring files based on hashes older than a certain date might be a solution.... but taking hashes of files can be time consuming. Any ideas would be greatly appreciated! Note: I've already looked at quite a lot of answers on here... looked into File Server Resource Monitor, powershell scripts, batch scripts, etc. They still use the last access time, last modified time or creation time... which, as described, do not fit the above needs.

  • How can I make AdobePS recognize my printer in Win98se?

    - by Jeff
    I am trying to use the Adobe product "PostScript Printer Driver AdobePS 4.2.6 for Windows 95 and Windows 98" to connect a new HP laser printer (CP1025nw) via USB to a machine running Win98SE. I am using the PPD file for "HP color LaserJet PS". The Adobe utility installs the printer, and it shows up on the printer list... but it doesn't print. I sometimes get an error message that an error occurred writing to USB001. Evidently, AdobePS cannot "see" the printer. I suspect there is a problem with the printer port; nothing I pick works. I have two virtual printer ports, USB001 and USB002, which were created during the installation of an HP inkjet printer. The inkjet will work on either of these, but the laser printer works on neither. When I connect the LaserJet printer, the Windows New Hardware Wizard activates, but I don't see how to point it at the AdobePS PostScript driver. HP has no support whatsoever for Win98, so there's no hope of getting help from them. The printer works great with the several WinXP computers connected through either the network connection or the USB. So, how do I convince Win98SE to use the Adobe PostScript printer driver? Is this printer even a PostScript printer? Is there a better way to get Win98SE to talk to this printer? Any suggestions?

  • Limited user requires admin rights for plug and play printer?

    - by Kalamane
    I have a small fleet of laptops that aren't part of a domain running Windows XP Pro SP3 as limited users. They are used for printing different shipping documents. I have a script that runs when they start up that uses devcon and prntmngr to detect and install/configure the currently connected usb printers. This lets us deploy the laptops to any printing station with a USB printer and have the printer 'just work' for the end user employee. I've taken the original clone image and have added functionality to it. Since then I've discovered a bit of an issue with using HP LaserJet P1606dn printers. They have started asking for admin rights on setup. This is with and without the script running. Previously they would automatically install because I had installed WHQL plug and play drivers for them. I thought it might have to do with the HP Smart Install Utility but it happens when that is disabled. I don't have a good point to roll back to before this started happening because this was an issue on the image I took initially to start this upgrade. What could be causing this?

  • Verification of downloaded package with rpm

    - by moooeeeep
    I wanted to install a package on CentOS 6 via rpm (e.g., the current epel-release). EDIT: Of course I would always prefer installation via yum, but somehow I failed to get that specific package installed using the normal approach. For that case, the EPEL FAQ recommends Version 2 below. As I'm downloading the package through an insecure channel (http), I wanted to make sure that the integrity of the file is verified using information that is not provided with the downloaded file itself. Is that actually the case for all of these approaches? I've seen various approaches to this on the internet:

    Version 1

        rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 2

        rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 3

        wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
        rpm --import https://fedoraproject.org/static/0608B895.txt
        rpm -K epel-release-6-7.noarch.rpm
        rpm -i epel-release-6-7.noarch.rpm

    I do not know rpm very well, so I wondered how they might differ. My guess (after reading the manpage) is that the first should only be used when the package is not previously installed, the second would additionally remove previous versions of the package after installation, and the first two omit some verification steps that rpm -K performs before the actual installation. So my main questions at this point are:

    - Are my guesses correct, or am I missing something?
    - Is the rpm --import ... implicitly done for the first two approaches as well, and if not, isn't it necessary to do so after all?
    - Are the additional checks performed by rpm -K ... at all relevant?
    - What is the best (most secure, most reliable, most maintainable, ...) way of installing packages via rpm in general?
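
    As a small addition to Version 3, the checks can be made more explicit, sketched below: list which GPG keys this rpm database already trusts, and ask rpm -Kv for the verbose breakdown so it is obvious whether a signature was actually verified against a key or only the digests were checked. The package filename is the one from the question.

        # Which public keys does this rpm database already trust?
        rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'

        # Verbose signature check: look for a "Signature, key ID ...: OK" line,
        # not just the header/MD5 digest lines
        rpm -Kv epel-release-6-7.noarch.rpm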

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment: I have a node.js server running which serves either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node, serving all static content (css, js, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that Varnish can cache static content quite well, and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use Redis at the moment for holding session data and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:

    - Throw Varnish in front of nginx and let Varnish cache static pages; no app code changes. Redis would cache dynamic db calls, which would require modifying my app code.
    - Ignore Varnish completely and let Redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).

    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).

  • XAMPP on Ubuntu serves PHP source for root url only

    - by mazaryk
    Hey. Okay, so I installed XAMPP on my Ubuntu machine, started it up, and everything worked. Apache ran my PHP app just fine (including requests to the root url "/"). However, after the first reboot since installing, when I request "http://localhost/" Apache serves up the index PHP page as a phtml source file. All other urls (like "http://localhost/login") work as expected.

    Background: the only modification I made to XAMPP was to set up a vhost for my app. The app uses an .htaccess file where I define some rewrite rules (the app is an MVC framework and all urls are rewritten to a single entry-point PHP file). I'm using XAMPP because I need PHP >= 5.3.0. I know Apache will serve up the source of a PHP file when it doesn't know to process PHP files. But the config does indeed have "AddType application/x-httpd-php .php", and as I said, the app works for all urls except the root "/" (and only since I've rebooted). The .htaccess file does contain a DirectoryIndex directive.

    XAMPP 1.3.7a, Ubuntu 9.10. Any ideas?

  • Using an environment variable set to a path value: the system cannot find the path specified for %OPENCV_DIR%

    - by dumbledad
    I'm trying to set an environment variable to point to the directory into which I have extracted the latest version of OpenCV, following the instructions in OpenCV's "Installation in Windows" tutorial. Here's my elevated command line listing:

        C:\>cd C:\OpenCV2.4.6\build\x64\vc11

        C:\OpenCV2.4.6\build\x64\vc11>cd ../../../..

        C:\>setx -m OPENCV_DIR C:\OpenCV2.4.6\build\x64\vc11
        SUCCESS: Specified value was saved.

        C:\>cd %OPENCV_DIR%
        The system cannot find the path specified.

        C:\>echo %OPENCV_DIR%
        %OPENCV_DIR%

    First I change directory to C:\OpenCV2.4.6\build\x64\vc11 to ensure that it exists. After that is successful I change directory back to the root of the C drive. Then I use setx to make OPENCV_DIR a system-wide environment variable whose value is the C:\OpenCV2.4.6\build\x64\vc11 path I verified in step 1. Noting the success of setx in the previous step, I now change directory using the new environment variable, but it fails with the message "The system cannot find the path specified." If I try to echo the value of the OPENCV_DIR environment variable, it appears not to be set. Looking in the Control Panel, the OPENCV_DIR environment variable looks correctly set. What's wrong? Why is the variable not working? Am I invoking it incorrectly when I use it to change directory or echo its value?

  • Developing high-performance and scalable zend framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be exactly like that one; it's just to give you an idea), and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability on multiple servers) + jQuery, a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are:

    - What do you think about my approach?
    - What solutions would you recommend in terms of the server-side approach (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic using PHP? I would be interested in your approach.

    Note: I'm a Zend Framework developer and don't have too much experience with the server side (to determine what would be the best solution for my scalable application).

  • Scripting around the lack of user:password@domain url functionality in jscript/IE

    - by Idiomatic
    I currently have a JScript that runs a PHP script on a server for me; dead simple. But I want to be at least somewhat secure, so I set up a login. Now, if I use the regular user:password@domain system it won't work (IE decided it was a security issue). And if I let IE just remember the password, then it pops up a security message confirming my login every time (which kills the point of the button). So I need a way to make the security message go away. Options I can see:

    - Lower security settings. To be honest I am fine with that, but nothing seems to make the prompt go away (there might be some registry setting to change).
    - Find a fix for JScript that will let me use a password in the url. There used to be a registry edit that allowed IE to use url passwords on older systems (not working on my 64-bit Windows 7 setup), though I doubt that would have helped JScript anyway (since it outright crashes).
    - Use an app other than IE, in which case I'm not sure how to go about it. I want it to be responsive and invisible, so IE was a good choice; it is near instant.
    - Use XMLHttpRequest instead of IE directly? It may even be faster, but I've no idea if it would help or just hit the same error.
    - Use a completely different approach. Maybe some app that can script website browsing.

    The script, for reference:

        var args = {};
        var objIEA = new ActiveXObject("InternetExplorer.Application");
        if( WScript.Arguments.Item(0) == "pause" ){
            objIEA.navigate("http://domain/index.html?pause");
        }
        if( WScript.Arguments.Item(0) == "next" ){
            objIEA.navigate("http://domain/index.html?next");
        }
        objIEA.visible = false;
        while(objIEA.readyState != 4) {}
        objIEA.quit();

  • Printer deployment via Group Policy not working on a single system

    - by Aron Rotteveel
    One of my coworkers just got a new laptop running Windows 7 Pro x64. We use a GPO to deploy the printers to every system, but for some reason it is not working on this system. I have been breaking my head over this for the past 3 hours now without any result. The strange thing is that gpresult /H seems to indicate that the GPO did run.

    The hardware:

    - Laptop: Windows 7 Professional x64
    - Print server: Windows Server 2008 x64 R1
    - HP Color LaserJet 2605dn
    - HP LaserJet P2015

    Driver packages on the server: HP Universal Printer Driver PCL5, both x86 and x64.

    Oddities and other info:

    - GPO working flawlessly on every other system, including my own Windows 7 Ultimate x64 laptop
    - gpresult /H shows the GPO being run
    - Windows Firewall completely disabled on the new laptop

    Below is the output from gpresult /H (originally in Dutch; translated here):

        Policies
          Windows Settings
            Printer Connections
              Path                                    Winning GPO
              \\Server2008\HP Color LaserJet 2605dn   Printers
              \\Server2008\HP LaserJet P2015          Printers
          Administrative Templates
            Policy definitions (ADMX files) retrieved from the local computer.
            Control Panel/Printers
              Policy                            Setting      Winning GPO
              Point and Print Restrictions      Disabled

    Like I said, I have been trying to figure this out for the past few hours without any result, so you are my last hope. Any help is appreciated.

  • Windows 7 black screen after boot animation

    - by t_virus
    This is happening frequently with my Windows 7 64-bit install. Today I was trying to install an icon pack. There was already a restore point prior to it, so I didn't feel like creating one. I replaced the following system files in both System32 and SysWOW64: imageres.dll, imagesp1.dll, zipfldr.dll, and shell32.dll. After a reboot I see the Windows boot animation and then a black screen (with no mouse). I left it like that for 10 minutes and the laptop automatically went into sleep mode. I wake it up and see an error box with the error "interactive logon failed to initialize". I close it and I'm left with a black screen again, this time with a mouse, though. I tried using automatic system repair; it just tells me to remove any extra hardware (I have none attached at the moment). System Restore finishes successfully but doesn't fix the problem. Safe mode doesn't work either: the same black screen after the boot animation. Can someone help me with this? Also, I'd be grateful if anyone could upload those default files (listed above) for both System32 and SysWOW64 from a Windows 7 64-bit install (no service pack).

  • Formatting not retained in paste from Ditto (clipboard manager). Plain text pasted instead [Solved] Add supported types

    - by Jeff Kang
    I'm trying to use Ditto on the Ditto documentation. If I copy the table of contents and then paste it (without Ditto) into the word processor, I get http://i.imgur.com/V1GU3.png, and the formatting is maintained. As a result of the copy operation, the table of contents also goes into the Saved Items List (= History List = lists the clips saved from the clipboard) in Ditto's main window. I open a blank document to paste from Ditto instead of the default clipboard, and press either Ctrl-`, the default Ditto window activation global hotkey, or click the tray icon. From this point, I can do 3 things to close the Ditto window and place the item on the clipboard (the default clipboard?):

    1. Select the item, and press Enter
    2. Put the cursor on the item, and double-click
    3. Select the item, and press Ctrl-c

    1) and 2) send a right-click where the cursor is after the Ditto window closes (presumably to have the paste option ready to access?). Ctrl-c just closes the Ditto window. Whichever method is used, the contents are pasted in what I believe is plain text: http://i.imgur.com/mQAZH.png. How do I keep the formatting that the default clipboard keeps? Thanks.
