Search Results

Search found 15423 results on 617 pages for 'uses clause'.


  • Other computer can't connect to MySQL Database

    - by user23950
    I have a VB .NET program that uses a MySQL database. It works when the computer that has WAMP installed is the one running the program. The same program displays an Unhandled Exception error when the computer it's running on does not have WAMP installed (and running). The only thing installed there is MySQL Connector/Net. How can I make this work? I have already tried opening port 20 by configuring the firewall, for both TCP and UDP.
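
    For reference, a minimal sketch of what the second machine actually needs (hypothetical host and credentials, and using the mysql-connector-python package rather than the Connector/Net the program itself uses): MySQL listens on TCP 3306 by default, so that is the port the firewall has to allow, and the MySQL account must permit connections from the remote host.

```python
# Minimal remote-connection sketch (hypothetical host and credentials).
# Assumes the mysql-connector-python package; MySQL listens on TCP 3306
# by default, so that is the port the client machine must be able to reach.
import mysql.connector

conn = mysql.connector.connect(
    host="192.168.1.50",      # IP of the machine running WAMP/MySQL
    port=3306,                # default MySQL port (not 20)
    user="appuser",
    password="secret",
    database="appdb",
    connection_timeout=5,
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()
```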


  • What is the standard way of using Q15 values?

    - by Alex
    To process 8-bit pixels and do things like gamma correction without losing information, we normally upsample the values, work in 16 bits or whatever, and then downsample them back to 8 bits. Now, this is a somewhat new area for me, so please excuse incorrect terminology. For my needs I have chosen to work in a "non-standard" Q15, where I only use the upper half of the range (0.0-1.0) and 0x8000 represents 1.0 instead of -1.0. This makes it much easier to calculate things in C. But I ran into a problem with SSSE3. It has the PMULHRSW instruction, which multiplies Q15 numbers, but it uses the "standard" Q15 range of [-1, 1-2⁻¹⁵], so multiplying (my) 0x8000 (1.0) by 0x4000 (0.5) gives 0xC000 (-0.5), because it thinks 0x8000 is -1. This is quite annoying. What am I doing wrong? Should I keep my pixel values in the 0000-7FFF range? This kind of defeats the purpose of it being a fixed-point format. Is there a way around this? Maybe some trick? Is there some kind of definitive treatise on Q15 that discusses all this?
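
    A quick way to see the clash between the two conventions is to model PMULHRSW's rounding multiply in plain integer arithmetic (a small sketch, not tied to any particular SIMD wrapper):

```python
# Model of the PMULHRSW rounding multiply on signed 16-bit Q15 values:
# result = ((a * b >> 14) + 1) >> 1, with a and b interpreted as signed.
def to_s16(x):
    return x - 0x10000 if x & 0x8000 else x

def pmulhrsw(a, b):
    a, b = to_s16(a), to_s16(b)
    return ((((a * b) >> 14) + 1) >> 1) & 0xFFFF

# In the "standard" Q15 convention 0x8000 is -1.0, so 1.0 * 0.5 comes out
# as -0.5 (0xC000) when 0x8000 is used to mean 1.0:
print(hex(pmulhrsw(0x8000, 0x4000)))  # 0xc000
# Keeping pixel values in 0x0000-0x7FFF (so 1.0 -> 0x7FFF) avoids the issue:
print(hex(pmulhrsw(0x7FFF, 0x4000)))  # 0x4000 (i.e. ~0.5)
```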


  • Unable to open websites that use HTTPS on Linux

    - by negai
    I have the following network configuration: my PC (192.168.1.20/24) uses 192.168.1.1/24 as its gateway; a D-Link 2760U router with the local address 192.168.1.1/24 has a VPN connection to the provider using PPTP. Whenever I try to open websites that require authorization (e.g. gmail.com, coursera.org), I get a request timeout. The problem occurs mostly on Linux (Ubuntu 12.04 and Debian 6.0), while most of these websites work correctly on Windows XP. Could you please help me diagnose the problem? Could it be related to NAT + HTTPS? Thanks
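
    One way to take the browser out of the picture is to attempt the TLS handshake directly from one of the Linux machines; if the sketch below also hangs, the problem is in the network path (an MTU/MSS issue over the PPTP tunnel is a common culprit) rather than in the browser. The hostname here is just an example:

```python
# Minimal TLS handshake test, to reproduce the timeout outside the browser.
import socket
import ssl

host = "mail.google.com"  # any HTTPS site that times out in the browser
ctx = ssl.create_default_context()
try:
    with socket.create_connection((host, 443), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            print("Handshake OK:", tls.version())
except OSError as exc:
    print("Handshake failed:", exc)
```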


  • Subdomain on a separate server (Windows/IIS) won't load default web site?

    - by DOTang
    I've got a domain hosted at godaddy.com. I set it up so that subdomain.mysite.com points to a different server which uses Windows/IIS (say, for example, IP 1.1.1.1). When I go to the subdomain I get no errors, but no webpage loads. In IIS, under the default website, I added a binding for subdomain.mysite.com with the IP 1.1.1.1, but still nothing loads, just a totally blank page. I know for sure the subdomain host record is working correctly because when I ping my subdomain, the correct IP shows. What am I missing to get this working?


  • Is there a (free or commercial) print server which prints PDFs from the network?

    - by Eonil
    I'm working in an office that uses a Windows server for printing, because our printer only supports a Windows driver. But there are also Mac OS X machines that require network printing, and I'm sure there is no Mac driver for the printer. So I came up with an idea: on the Mac, a virtual printer driver generates a PDF file and sends it to the print server; the print server then prints the PDF on its local printer. Is there a solution (free or commercial) that can do this?
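
    The receiving half of that idea can be quite small. As a rough sketch only (it assumes the machine attached to the printer can print files via a CUPS queue with the lp command, which a Windows-only driver would not provide; the folder path is hypothetical), a script that watches a shared drop folder and prints every PDF that lands in it:

```python
# Drop-folder print sketch: print every new PDF that appears in WATCH_DIR.
# Assumes a CUPS queue reachable via the `lp` command; the folder itself
# could be exported over SMB/AFP so the Macs' virtual PDF printer can
# write into it.
import subprocess
import time
from pathlib import Path

WATCH_DIR = Path("/srv/print-drop")   # hypothetical shared folder
seen = set()

while True:
    for pdf in WATCH_DIR.glob("*.pdf"):
        if pdf not in seen:
            subprocess.run(["lp", str(pdf)], check=False)
            seen.add(pdf)
    time.sleep(5)
```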


  • What can I do in order to inform users of potential errors in my software in order to minimize liability?

    - by phobitor
    I'm an independent software developer who has spent the last few months creating software for viewing and searching map data. The software has some navigation functionality as well (mapping, directions, etc.). The eventual goal is to sell it in mobile app markets. I use OpenStreetMap as my data source. I'm concerned about liability for erroneous map data, routing instructions, etc. that might result when someone uses the application. There are a lot of stories on the internet where someone gets into an accident, gets stuck, or gets lost because of their GPS unit/Google Maps/mapping app... I myself have come across incorrect map data in a GPS unit I have in my car. While I try to make my own software as bug free as possible, no software is truly bug free. And moving beyond what I can control, OpenStreetMap data (and street map data in general) is prone to errors as well. What steps can I take to clearly inform the user that results from the software aren't always perfect, and to minimize my liability?


  • Determining whether two fast moving objects should be submitted for a collision check

    - by dreta
    I have a basic 2D physics engine running. It's pretty much a particle engine; it just uses basic shapes like AABBs and circles, so no rotation is possible. I have CCD implemented that can give an accurate TOI for two fast moving objects, and everything is working smoothly. My issue now is that I can't figure out how to determine whether two fast moving objects should even be checked against each other in the first place. I'm using a quad tree for spatial partitioning, and for each fast moving object I check it against objects in each cell that it passes. This works fine for determining collisions with static geometry, but it means that any other fast moving object that could collide with it, but isn't in any of the cells that are checked, is never considered. The only solutions I can think of are to either make the cells large enough and cross my fingers that this is enough, or to implement some sort of brute force algorithm. Is there a proper way of dealing with this? Maybe somebody has solved this in an efficient manner, or maybe there's a better way of partitioning space that accounts for this?
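
    One common fix is to run the broad phase on swept bounds: inflate each fast mover's AABB so it covers the whole path travelled during the timestep, and hand any pair whose swept boxes overlap to the exact CCD/TOI test. A minimal sketch of the idea (brute-force pairing stands in for the quad tree query):

```python
# Swept-AABB broad phase sketch: an object's query box is the union of its
# AABB at the start and at the end of the timestep, so two fast movers are
# flagged for an exact TOI test whenever their swept boxes overlap.
from dataclasses import dataclass

@dataclass
class Body:
    x: float; y: float      # centre position
    vx: float; vy: float    # velocity
    hw: float; hh: float    # half extents of the AABB

def swept_aabb(b: Body, dt: float):
    x2, y2 = b.x + b.vx * dt, b.y + b.vy * dt
    return (min(b.x, x2) - b.hw, min(b.y, y2) - b.hh,
            max(b.x, x2) + b.hw, max(b.y, y2) + b.hh)

def overlaps(a, b):
    return a[0] <= b[2] and a[2] >= b[0] and a[1] <= b[3] and a[3] >= b[1]

def broad_phase_pairs(bodies, dt):
    boxes = [swept_aabb(b, dt) for b in bodies]
    # A quad tree would be queried with these boxes; brute force here just
    # shows which pairs would be submitted to the exact CCD test.
    return [(i, j) for i in range(len(bodies)) for j in range(i + 1, len(bodies))
            if overlaps(boxes[i], boxes[j])]
```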


  • Apache: Serve http traffic over https

    - by Gatsys
    Using Apache. I have a demo of a webapp that usually uses HTTPS. However, for the demo, I want all traffic to be on HTTP even if a user hits HTTPS. I have added the following entry, and it works if you go to http://AAAA.com:443, but it doesn't work if you go to https://AAAA.com. It gives you this error: SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long) Here is my current setup: <VirtualHost 111.111.111.1:443> ServerName test.AAAA.com DocumentRoot /var/www/AAAA.com </VirtualHost> How do you redirect HTTPS to HTTP without encountering the SSL error? In other words, how do I turn off SSL for https:// URLs?


  • Debian Squeeze - Monitor outgoing traffic

    - by Sam W.
    I have a small web server running Lighttpd 1.4 that has steadily used 250GB or less of bandwidth per month for the past couple of months. But since May the traffic has spiked to more than triple what it was. Nothing special was on my site to make it spike like that. When I checked with vnstat I found that 70% of the bandwidth is tx. I suspect I've been hacked and my web server is acting as some sort of bot. ClamAV turned up nothing, and I already replaced the Joomla installation with a fresh one early in June, but the traffic has stayed the same. My question: how can I monitor my server to see what is transmitting all that data out? I need to pinpoint the culprit. Can someone please point me in the right direction? Thank you.
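
    Besides tools like iftop, nethogs or tcpdump, a quick way to see where the bytes are going is to tally outbound traffic per destination for a minute or two. A rough sketch (assumes the third-party scapy package and root privileges; the local address is a placeholder):

```python
# Tally outbound bytes per destination address for 60 seconds.
# Assumes the scapy package and root privileges; LOCAL_IP is hypothetical.
from collections import Counter
from scapy.all import IP, sniff

LOCAL_IP = "203.0.113.10"
outbound = Counter()

def tally(pkt):
    if IP in pkt and pkt[IP].src == LOCAL_IP:
        outbound[pkt[IP].dst] += len(pkt)

sniff(filter="ip", prn=tally, store=0, timeout=60)
for dst, nbytes in outbound.most_common(10):
    print(f"{dst:<18} {nbytes} bytes")
```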


  • 12.04 doesn't boot anymore after a power failure

    - by Felix
    I'm a Windows user and I have no experience with Linux and Ubuntu. I installed Ubuntu 12.04 on my netbook (Asus 1215B) and everything worked fine. Yesterday I ran the "update application" and updated over 120 "things" (I have no idea what exactly). After that I was asked to reboot, and I did. Ubuntu starts again, and at the load screen with the 5 dots that normally begin to change color, it freezes. After 20 minutes I took out the battery to try another reboot (yes, not the best idea), and now nothing happens. I boot from the HDD and I get an error: BOOTMGR is missing. I have important data on the hard drive. Is there an option to get this fixed? Or if not, to at least get the data from the hard drive? Ubuntu 12.04 64-bit. Edit: only Ubuntu is on this netbook, which uses the whole 500GB HDD as one partition. The filesystem is NTFS. The hardware seems okay. The USB drive I used to install the OS was formatted as FAT32.


  • Cannot send email from EC2 instance on port 587

    - by Tahsin Mostafiz
    I have written a mail service for our Flask application that uses Celery and RabbitMQ to send emails (using Gmail). I have the Celery consumer and producer communicating okay, but I cannot get it to send emails. I am getting a socket.error: [Errno 101] Network is unreachable. I think this means that AWS is blocking port 587 - even though in my security group I opened both ports 587 and 25 (inbound and outbound). Any reason why this is happening? Any help will be highly appreciated.
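
    A quick way to separate network problems from Celery or application configuration is to attempt the SMTP handshake by hand from the instance. A minimal sketch, assuming Gmail's standard submission endpoint:

```python
# Bare-bones SMTP connectivity test from the EC2 instance.
# If this raises "Network is unreachable" or times out, the problem is in
# the network path, not in Celery or the Flask mail code.
import smtplib

try:
    with smtplib.SMTP("smtp.gmail.com", 587, timeout=10) as smtp:
        smtp.ehlo()
        smtp.starttls()
        smtp.ehlo()
        print("Port 587 is reachable and STARTTLS succeeded")
except OSError as exc:
    print("Connection failed:", exc)
```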


  • Download and locally store all emails from all mailboxes on Office365?

    - by scape
    We have a business that uses Office365 and we want to be able to save all the emails locally. I found a thread on the Office365 community pertaining to this, and basically it said there is no direct way of accomplishing it. I am curious if anyone has considered this and if there is a good method for storing these emails locally, even if it's some nifty PowerShell programming. All I've come up with is having a master mailbox which can view all mailboxes, and just having it sync and archive locally to the computer. I have not tried this yet, as the storage file sounds like it will be huge, so this does not seem like a fantastic idea, and I'm open to any suggestions!
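
    If IMAP access is enabled for the mailboxes, a scheduled script can pull messages down to local .eml files without routing everything through a master mailbox in Outlook. A rough sketch with hypothetical credentials (note that plain IMAP password login may be restricted on some Office365 tenants):

```python
# Download every message in a mailbox's INBOX to local .eml files via IMAP.
# Hypothetical credentials; assumes IMAP is enabled for the mailbox.
import imaplib
from pathlib import Path

HOST = "outlook.office365.com"
USER = "someone@example.com"
PASSWORD = "app-password"
OUT = Path("mail-archive")
OUT.mkdir(exist_ok=True)

imap = imaplib.IMAP4_SSL(HOST, 993)
imap.login(USER, PASSWORD)
imap.select("INBOX", readonly=True)
_, data = imap.search(None, "ALL")
for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")
    (OUT / f"{num.decode()}.eml").write_bytes(msg_data[0][1])
imap.logout()
```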


  • Lighter in CPU/Memory Usage: Lubuntu or Xubuntu

    - by Luis Alvarado
    I am looking for an Ubuntu flavour that consumes less memory and CPU. I have read about both Lubuntu and Xubuntu (the homepages, Wikipedia, Phoronix and other sites comparing the two), but from experience, which one uses less memory and is less CPU intensive? I need to install it on very old hardware and want to persuade the owner of the hardware of the benefits of Ubuntu. In this case I want to install 11.10, or 12.04 when it comes out. How does each behave in those versions? The 2 PCs I will be installing either Xubuntu or Lubuntu on are: Granpa PC: CPU - Pentium 2 450MHz, RAM - 64MB DIMM, Video - 16MB; used for documents and Internet. No listening to music, no watching videos, just document writing. The other old meat: CPU - Pentium 3 550MHz, RAM - 128MB DIMM, Video - 16MB; used for documents and Internet also, but they want (or are wishing) to use it to watch movies and listen to music. This one has Internet; the other one does not.


  • Architecture for dashboard showing aggregated stats [on hold]

    - by soulnafein
    I'd like to know what common architectural patterns exist for the following problem. Web application A has information on sales, users, responsiveness score, etc. Some of this information is computationally intensive to produce and/or has complex business logic behind it (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writes I'm planning to use a RESTful API, e.g. create a new entity, update an entity, etc. In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis. Some data should update more often, e.g. every 10 minutes. I can think of 3 ways of doing this: (1) a scheduled task in app B that calls an API of app A that provides some aggregated data; app B then stores it in Redis and uses that to render pages (cons: it performs complex calculations within a web request and requires lots of work - an API server and client, storage, etc.; pros: the business logic still lives in app A); (2) a scheduled task in app A that aggregates data in a non-web process and stores it directly in Redis to be accessed by app B; (3) a scheduled task in app A that aggregates data in a non-web process and uses an API in app B to save it. I'd like to know if there is a well-known architectural solution to this type of problem, and if not, what other pros/cons there are for the solutions I've suggested.
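
    For illustration, a minimal sketch of the write side of option (2): the aggregation stays next to the business logic in app A, and app B only ever reads pre-computed values from Redis. Key names and metrics are hypothetical; the redis-py client is assumed:

```python
# Scheduled task (e.g. every 10 minutes) running inside app A: compute the
# current month's aggregates with the existing business logic and write them
# to Redis, where the admin app B only reads them.
# Hypothetical key names and metrics; assumes the redis-py client.
import datetime
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def compute_month_stats(month):
    # Placeholder for app A's real (possibly expensive) business logic.
    return {"sales": 12345, "users": 678, "responsiveness": 0.93}

def refresh_dashboard():
    month = datetime.date.today().strftime("%Y-%m")
    key = f"dashboard:stats:{month}"
    r.set(key, json.dumps(compute_month_stats(month)))

refresh_dashboard()
# App B's read side is then just: json.loads(r.get("dashboard:stats:2012-06"))
```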


  • Using mod_rewrite for a Virtual Filesystem vs. Real Filesystem

    - by philtune
    I started working in a department that uses a CMS in which the entire "filesystem" works like this: you create a named file or folder - it is given a unique node (ex. 2345) as well as a default "filename" (ex. /WelcomeToOurProductsPage), and a template is applied; you then assign one or more aliases to the file for a URL redirect (ex. /home-page-products - it can also be accessed by /home-page-products.aspx); a new rewrite command is written to the .htaccess file for each and every alias; the server accesses either /WelcomeToOurProductsPage or /home-page-products and redirects to something like /template.aspx?tmp=2&node=2345 (here I'm guessing what it does - I only have front-end access for now - but I have enough clues to strongly assume); node 2345 grabs content stored in a SQL DB and applies it to the template. Note: there are no actual files being created on the filesystem; it's entirely virtual. This is probably a very common thing, but since I had never run across this kind of system until two months ago, I wanted to explain it in case it isn't common. I'm not a fan at all of ASP or closed-source systems, so it may be that this is common practice for ASP developers. My question, which has taken far too long to ask, is: what are the benefits of this kind of system, as opposed to creating an actual file hierarchy? Are there any drawbacks to having every single file server call redirected? To having the .htaccess file hold rewrite rules for every single alias?
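
    Mechanically, the whole "filesystem" boils down to an alias table in front of a single front controller; nothing per-page has to exist on disk. A toy sketch of the lookup, with made-up data, just to illustrate the indirection (not the actual CMS):

```python
# Toy model of the CMS's virtual filesystem: every URL is just an alias
# that resolves to a node id, and one front controller renders the node.
# Hypothetical data, only to illustrate the indirection.
ALIASES = {
    "/WelcomeToOurProductsPage": 2345,   # default "filename"
    "/home-page-products": 2345,         # extra alias for the same node
}

def resolve(path: str):
    node = ALIASES.get(path.removesuffix(".aspx"))
    if node is None:
        return "404 Not Found"
    # Equivalent of the rewrite to /template.aspx?tmp=...&node=...
    return f"render(template=2, node={node})  # content pulled from SQL"

print(resolve("/home-page-products.aspx"))
```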


  • Is using the hosts file to resolve a SQL Server name more performant?

    - by Ice
    Hi, we have a legacy application that uses an access.mdb with hundreds of ODBC-linked tables on a SQL Server; the access.mdb contains nothing but these ODBC connections. Now we are considering using a virtual SQL Server name for these ODBC connections and resolving it in the local hosts file to the IP address of the real SQL Server. This way we can easily switch between a test database server and the production server by changing a single entry in the hosts file. Everything works fine, and now comes the question: could this be more performant, because there is a single point for resolving the SQL Server (name or IP address)? Is there something like a network cache / DNS cache? Peace, Ice


  • 100% CPU in QuickTime H.264 decoder on Windows 7, except when using XP compatibility mode

    - by user858518
    I have a Windows program that uses the Apple QuickTime API to play video. On Windows 7, CPU usage is 100% on one core, which I believe is why the playback is choppy. If I turn on XP compatibility mode for this program, the CPU usage is around 20% of one core, and playback is normal. Using a profiling tool called Very Sleepy (http://www.codersnotes.com/sleepy), I was able to narrow down the high CPU usage to a function in the QuickTime H.264 decoder called JVTCompComponentDispatch. I can't imagine why there would be a difference in CPU usage when XP compatibility mode is turned off or on. Any ideas?


  • Outlook '10 hangs often, IMAP sync

    - by user23150
    We have 3 employees using IMAP to sync with their desktops and Android phones. Two are using 70% of their account's storage, another is using 80%. They all have similar folder structures, message counts, etc. The employee at 80% storage constantly has Outlook freeze on them for up to 5 minutes at a time. I realize this is Outlook connecting and doing activity on the server, but no one else has this problem. In fact, one of the users at 70% storage uses a very slow laptop and doesn't have freezing issues. The network is the same, the settings are the same - I'm at a loss as to how to proceed. Obviously "Outlook is a crummy IMAP client" doesn't help management...


  • Outlook hangs during startup at the "loading profile" step

    - by Marko Apfel
    Problem: starting Outlook shows only the splash screen with the message "loading profile". I can cancel the startup, but restarting shows the same thing. I verified with Task Manager that no hidden Outlook process is bothering me. Solution - Scanpst: the Microsoft Outlook Inbox Repair Tool (scanpst.exe) is normally installed along with Outlook. Some people can reach it via the Start menu, but not me; my lovely Launchy found it under "C:\Program Files (x86)\Microsoft Office\Office14\SCANPST.EXE". Scanpst first asks you for the PST file you would like to scan. I started with the first default offer: C:\Users\…\AppData\Local\Microsoft\Outlook\….ost - and this brings up the information that another application is using this file. Handle: to investigate the culprit, Handle from Sysinternals is your friend in such cases. Start it from an administrative console and pipe the output to a file: handle > c:\temp\handle.txt. Now you can open this file with the editor of your choice and search for the blocked file (your PST file). At the top of the section you see the application that has a handle open on this file (SfdcMsO1.exe). Task Manager: kill this application and start Outlook again. And voilà - everything starts up fine for me.
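
    For reference, the same "who has this file open?" check can be scripted; a small sketch using the third-party psutil package (run from an administrative prompt; the path is a hypothetical OST location):

```python
# Find processes holding a handle to a given file, similar in spirit to
# Sysinternals Handle. Assumes the psutil package; run as administrator.
import psutil

TARGET = r"C:\Users\me\AppData\Local\Microsoft\Outlook\me.ost"  # hypothetical

for proc in psutil.process_iter(["pid", "name"]):
    try:
        if any(f.path.lower() == TARGET.lower() for f in proc.open_files()):
            print(proc.info["pid"], proc.info["name"])
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        pass
```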


  • Make services not start automatically after reboot (as they require access to an encrypted partition)

    - by Binary255
    Hi, I use Ubuntu Server 10.04. I more or less only want the server to be accessible over SSH after a reboot. I will then log in and mount the encrypted partition myself, after which I start the services that use it. How would I go about setting something like that up? (My first idea was to have everything except /boot in an encrypted LVM, but I never got logging in through SSH and mounting the LVM to work - initramfs was a bit too complicated for me. Otherwise I think this would have been the best solution.)


  • Why do I need to add my application pool identity to the IIS_IUSRS group?

    - by smcolligan
    I'm setting up a .NET v4.0 web application on a Windows 2008 R2/IIS 7.5 server that uses a domain account for the application pool identity. When I access the site, I get the following error: The current identity () does not have write access to 'C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files' According to this: http://learn.iis.net/page.aspx/140/understanding-built-in-user-and-group-accounts-in-iis/ the identity of the worker process is added to the IIS_IUSRS group when the process starts. This seems to work fine for the existing .NET v2.0 applications I have running on the same server (I have not had to add their domain account application pool identities to IIS_IUSRS group). This does not seem to be the case for the first .NET v4.0 web application I'm setting up. Once I add the identity to the group, everything works fine. I suspect something is not configured correctly that is forcing me to do this. I would like to understand this before rolling out more sites/servers. Thanks in advance for your help...


  • What constitutes "commercial purposes"?

    - by RoboShop
    I'm looking at this license. It says that I can use it for "non-commercial purposes". What does that mean? I see that on Stack Exchange, under the network profile, there is a graph that tracks your points across your Stack Exchange accounts. It uses a control called Highcharts, which has a paid version and a Creative Commons licensed version. So would Stack Overflow constitute a commercial site? We don't pay to use this site, but obviously the site makes money from ads, etc. Then again, there are a lot of sites with ads that won't necessarily make a profit; the ads may only be subsidizing their costs. But even then, you could argue that even if the ads are only subsidizing costs, a lot of IT companies run at a loss in order to build a big enough customer base. So where is the line here? Is it any website on the internet? Is it any website that has ads? Is it any website that turns over a profit?


  • How to install PHP CLI with pcntl alongside Zend Server

    - by fazy
    I have Zend Server CE 5.6 with PHP 5.2 running on Ubuntu 11.10. Now the need has arisen to run a command line PHP script that uses PHP's pcntl functionality. First of all, I had no PHP command line in my path, so I made a symlink to the Zend one: sudo ln -s /usr/local/zend/bin/php /usr/bin However, when I run my script, I now get this error: PHP Fatal error: Call to undefined function pcntl_fork() The Zend web control panel doesn't offer pcntl in the list of modules, so how do I get this functionality? Is it safe to use apt-get to install PHP directly, to run alongside the Zend instance? If so, how do I make sure I get version 5.2? I guess the following would pull in PHP 5.3: apt-get install php5-cli I could probably muddle through, but any pointers to help me avoid making a mess would be much appreciated!


  • For a particular domain, how can I cache its JSON responses locally?

    - by Chris
    I'm coding the frontend of a web app that uses XHR to grab JSON data from a 3rd party. The 3rd party service is slow and because of its API design, we need to make a LOT of API requests every time I refresh the page to test some new code. It's making the development loop painful. The requests are GETs, POSTs and PUTs even though I'm pretty sure none of the requests are changing state. I want to go to localhost for the JSON rather than to this 3rd party API - simply to make my development process faster.
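
    One lightweight approach is to point the frontend at a tiny localhost proxy that forwards each request to the third party once and replays the stored JSON afterwards. A rough sketch using only the Python standard library (the upstream URL is hypothetical; since the GETs/POSTs/PUTs don't really change state, caching all of them by method + path + body is safe enough for development):

```python
# Minimal record-and-replay proxy for development: forwards each request to
# the upstream API once, caches the JSON body on disk, and serves the cached
# copy on subsequent identical requests. Upstream URL is hypothetical.
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path
from urllib.request import Request, urlopen

UPSTREAM = "https://api.example-third-party.com"
CACHE = Path("api-cache")
CACHE.mkdir(exist_ok=True)

class CachingProxy(BaseHTTPRequestHandler):
    def _handle(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) if length else b""
        key = hashlib.sha256(
            b"|".join([self.command.encode(), self.path.encode(), body])
        ).hexdigest()
        cached = CACHE / f"{key}.json"
        if not cached.exists():
            req = Request(UPSTREAM + self.path, data=body or None,
                          method=self.command,
                          headers={"Content-Type": "application/json"})
            with urlopen(req, timeout=30) as resp:
                cached.write_bytes(resp.read())
        payload = cached.read_bytes()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(payload)

    do_GET = do_POST = do_PUT = _handle

HTTPServer(("127.0.0.1", 8080), CachingProxy).serve_forever()
```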


  • Is it possible to change a user's home directory permissions in OS X?

    - by Sosiska
    Most of our staff uses OS X as their main operating system. The problem is that recently we were attacked with some odd malware: users receive a zip file via mail, and when they open this zip file they execute a keylogger binary that is inside it (one click is enough). We have some non-technical limitations, and due to these limitations we can't configure the users' mail servers. But we do have physical access to their laptops. As far as I know, it is possible in Linux and *BSD to mount a user's home directory without the "x" (execute) permission, so users can't run binaries from inside their home directory. Is it possible to configure OS X so that users can't execute files inside /Users/?

