Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.

Page 545 of 1180

  • How do photoshop slices and layer comps interact?

    - by Steve314
    I'm interested in using Photoshop (I have CS2) for some user interface design. I was hoping to use slices and layer comps to mark out particular elements, and to use JavaScript scripting to export multiple graphics files and text descriptions (mainly the positions and sizes of slices) that will be used by my program. My problem is that I've never used Photoshop for web design, or otherwise used slices, and I'm not confident that I understand how they interact with layer comps. This is what I believe (and hope) is correct:

    - Manual slices aren't affected by layer comps in any way - they aren't saved as part of a layer comp, and the same manual slices will be active irrespective of which layer comp is selected.
    - Layer-based slices aren't directly affected by layer comps, but they are indirectly affected in that the layer comp saves details of layer position and style. Selecting a layer comp may therefore move a layer and change its style, affecting the location and size of its layer-based slice, or may effectively disable the slice by hiding the layer.
    - Automatic slices aren't directly affected by layer comps, but are indirectly affected by the changes to the layer-based slices.

    So, layer-based slices (which are my main interest) may move, may change size (to accommodate a style such as a drop shadow), and may be effectively disabled by the layer being hidden. Other details (and all details of manual slices) will remain constant irrespective of which layer comp is active. Is that correct?

  • Padding is invalid and cannot be removed

    - by Ajay
    I have hosted an ASP.NET 2.0 site. Every day, I am getting the error "Padding is invalid and cannot be removed" 2-3 times. The backend used is SQL Server 2005, and the site is controlled via the Plesk 9.2 control panel. Pooling is enabled in IIS with a timeout of 120 minutes - can that be the reason for this? I have not used any encryption except for the stored passwords (MD5). The error message is:

        Base Exception: Padding is invalid and cannot be removed.
        Source: System.Web
        TargetSite: Void ThrowError(System.Exception, System.String, System.String, Boolean)
        Message: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    And the system log (Application) says:

        Event code: 4009
        Event message: Viewstate verification failed. Reason: The viewstate supplied failed integrity check.
        Event detail code: 50203
        ViewStateException information: Exception message: Invalid viewstate.
        Port: 31235
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)

    I have hosted it on a dedicated server, not in a web farm. Will keeping a static machine key help resolve this issue? Please guide me on this.

  • HTML5 <video> callbacks?

    - by Andrew
    I'm working on a site for a client and they're insistent on using HTML5's video tag as the delivery method for some of their video content. I currently have it up and running with a little help from http://videojs.com/ to handle the Internet Explorer Flash fallback.

    One thing they've asked me to do is, after the videos finish playing (they're all of different lengths), fade them out and then fade a picture in place of the video - think of it like a poster frame shown after the video. Is this even possible? Can you get the timecode of a currently playing movie via JavaScript or some other method? I know Flowplayer (http://flowplayer.org/demos/scripting/grow.html) has an onFinish function - is that the route I should take instead of the HTML5 video method? Does the fact that IE users will be getting a Flash player require two separate solutions?

    Any input would be greatly appreciated. I'm currently using jQuery on the site, so I'd like to keep the solution in that realm if at all possible. Thanks!

  • Truecrypt or default Disk Utility on Mac?

    - by Kaushik Gopal
    Windows by default doesn't come with a password-protected folder option (other than in Win7 Ultimate), so I used to swear by TrueCrypt, which was great. But I've read in a couple of places that Mac OS X by default has a way of protecting folders using the default Disk Utility. So my question is: which is better, using TrueCrypt on the Mac or just sticking with the default Disk Utility app? Can somebody let me know the advantages of one over the other?

    A summary from the very helpful answers below:

    - If you're looking for cross-platform usage, TrueCrypt is the obvious tool of choice.
    - If you're looking for convenience, and intend to stick only to the Mac platform, use the default Disk Utility app.

  • Ajax / Internet Explorer Encoding problem

    - by mnml
    Hi, I'm trying to use jQuery's autocomplete plug-in, but for some reason Internet Explorer is not compatible with the other browsers: when there is an accent in the "autocompleted" string, it passes it with a different encoding.

        IP - - [20/Apr/2010:15:53:17 +0200] "GET /page.php?var=M\xe9tropole HTTP/1.1" 200 13024 "http://site.com/page.php" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)"

        IP - - [20/Apr/2010:15:53:31 +0200] "GET /page.php?var=M%C3%A9tropole HTTP/1.1" 200 - "http://site.com/page.php" "Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.9 Safari/533.2"

    I would like to know if there is any way I can still decode those variables to output the same result.
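    The log excerpt shows the same value arriving in two encodings: IE sends the raw Latin-1 byte \xe9 for "é", while Chrome sends the percent-encoded UTF-8 sequence %C3%A9. The page in question is PHP, but the normalization idea is language-agnostic; here is a minimal sketch in Python 2, purely as an illustration (the helper name is made up):

        def normalize(raw):
            """Decode a query-string value that may be UTF-8 or Latin-1 into unicode."""
            try:
                return raw.decode('utf-8')
            except UnicodeDecodeError:
                # Fall back for clients (IE in the log above) that send legacy single-byte text.
                return raw.decode('latin-1')

        # Both request variants from the access log end up as the same string:
        assert normalize('M\xe9tropole') == normalize('M\xc3\xa9tropole') == u'M\xe9tropole'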

  • Running Hadoop example in pseudo-distributed mode on a VM

    - by manas
    I have set up Hadoop on an OpenSUSE 11.2 VM using VirtualBox, and I have made the prerequisite configuration changes. I ran this example in standalone mode successfully, but in pseudo-distributed mode I get the following error:

        $ ./bin/hadoop fs -put conf input
        10/04/13 15:56:25 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:25 INFO hdfs.DFSClient: Abandoning block blk_-8490915989783733314_1003
        10/04/13 15:56:31 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:31 INFO hdfs.DFSClient: Abandoning block blk_-1740343312313498323_1003
        10/04/13 15:56:37 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:37 INFO hdfs.DFSClient: Abandoning block blk_-3566235190507929459_1003
        10/04/13 15:56:43 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:43 INFO hdfs.DFSClient: Abandoning block blk_-1746222418910980888_1003
        10/04/13 15:56:49 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
        10/04/13 15:56:49 WARN hdfs.DFSClient: Error Recovery for block blk_-1746222418910980888_1003 bad datanode[0] nodes == null
        10/04/13 15:56:49 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/max/input/core-site.xml" - Aborting...
        put: Protocol not available
        10/04/13 15:56:49 ERROR hdfs.DFSClient: Exception closing file /user/max/input/core-site.xml : java.net.SocketException: Protocol not available
        java.net.SocketException: Protocol not available
            at sun.nio.ch.Net.getIntOption0(Native Method)
            at sun.nio.ch.Net.getIntOption(Net.java:178)
            at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
            at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
            at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156)
            at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286)
            at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129)
            at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

    Any leads will be highly appreciated.

  • Accessing a file on a network drive

    - by Rekreativc
    Hello. Background: I have an application that has to read files from a network drive (Z:). This works great in my office domain; however, it does not work on site (in a different domain). As far as I can tell, the domain users and network drives are set up in the same way, but I do not have access to the users etc. in the customer's domain. When I couldn't access the network drive, I figured I needed a token for a user. This is how I impersonate the user:

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern bool LogonUser(String lpszUsername, String lpszDomain, String lpszPassword,
            int dwLogonType, int dwLogonProvider, ref IntPtr phToken);

        ...

        const string userName = "Razvoj02";
        const string pass = "Programer02";
        const string domainName = null;

        const int LOGON32_PROVIDER_DEFAULT = 0;
        const int LOGON32_LOGON_INTERACTIVE = 2;

        IntPtr tokenHandle = new IntPtr(0);
        bool returnValue = LogonUser(userName, domainName, pass,
            LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref tokenHandle);

        if (!returnValue)
            throw new Exception("Logon failed.");

        WindowsImpersonationContext impersonatedUser = null;
        try
        {
            WindowsIdentity wid = new WindowsIdentity(tokenHandle);
            impersonatedUser = wid.Impersonate();
        }
        finally
        {
            if (impersonatedUser != null)
                impersonatedUser.Undo();
        }

    Now here is the interesting/weird part. On my network the application can already access the network drive, yet if I try to impersonate the active user (exactly the same user, including the same domain) it is no longer able to access the network drive. This leaves me helpless, since now I have no idea what works and what doesn't - and, more to the point, will it work on site? What am I missing?

  • 16TB Volumes and SNMP On Windows

    - by John K
    As volumes larger than 16TB became more common, it was recognized that the 32-bit value used to report disk size and usage within the standard HOST-RESOURCES MIB in SNMP was not large enough to report the proper disk size. Net-SNMP seems to have addressed this issue by simply manipulating the value of "AllocationUnits" so that disk utilization stays within a 32-bit value (since total disk size/usage is equal to the 32-bit block count times the allocation unit), allowing volumes larger than 8/16TB to be reported. Presuming you don't have any reporting interest in the allocation unit, this seems like a fine solution: https://bugzilla.redhat.com/show_bug.cgi?id=654384

    Windows' built-in SNMP service, however, seems to continue to suffer from this error, simply reporting the modulo of the used/assigned disk space, resulting in inaccurate disk size reporting. Is there a way to make Windows correctly report disk usage for volumes over 16TB? We attempted to simply install Net-SNMP 5.5 x64 and disable the Windows SNMP service entirely, but unfortunately this did not fix our issue.

    I've seen people in the Cacti community mention simply scripting out a solution. Unfortunately, we're using Observium for quick and basic systems monitoring. If the issue can't be corrected on the Windows side, can Observium be made to report custom MIBs?
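    To make the allocation-unit workaround concrete, here is a small Python 2 sketch (the 20 TiB volume and the cluster sizes are illustrative, not taken from the question) showing why a 32-bit block count overflows at a 4 KiB allocation unit but fits once the reported unit is enlarged:

        INT32_MAX = 2**31 - 1          # the HOST-RESOURCES MIB reports sizes as 32-bit integers

        volume_bytes = 20 * 2**40      # a hypothetical 20 TiB volume

        for allocation_unit in (4096, 65536):
            blocks = volume_bytes // allocation_unit
            print "unit = %5d bytes -> %11d blocks, fits in 32 bits: %s" % (
                allocation_unit, blocks, blocks <= INT32_MAX)

        # unit =  4096 bytes ->  5368709120 blocks, fits in 32 bits: False
        # unit = 65536 bytes ->   335544320 blocks, fits in 32 bits: True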

  • Append a dynamically changing watermark to a PDF in SharePoint

    - by ccomet
    This is primarily a question of possibilities more than instructions. I'm a programming consultant working on a WSS project site system for my client. We have a document library in which files are uploaded to go through a complex approval process. With multiple stages in this process, we have an extra field which dictates the current status of the document.

    Now, my client has become enamored with the idea of PDF watermarking. He wants the document (which is already a PDF) to be affixed with a watermark corresponding to the current status, such that with each stage of the approval process the watermark will change. One method - the traditional method for PDF watermarking - is to keep one "clean" copy of the document hidden somewhere on the site, and at each stage of the approval process create a new PDF from it that carries the appropriate watermark. Since the filename would never change, this new PDF could be uploaded continually to a public library, always overwriting the old version and simulating a "dynamically changing watermark". However, at the various stages there will also be people uploading clean copies with corrections and suggestions, never mind the complexity of juggling two libraries and the fact that we double the number of files stored. My client and I agree that this is not a practical path to choose.

    What we would like to do is be able to "modify" the watermark in a PDF, so that we only have to keep one copy of the file. Unfortunately, from what I've seen, in most cases when you make something like a watermark - which by its nature is supposed to be "unmodifiable" - you won't be able to edit it later. So, is it possible to have a part of a PDF which cannot be changed by anyone who downloads the file, but can be changed as part of a workflow or other object-model process? Thanks in advance!

  • AI: Determining what tests to run to get the most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com. I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector.
    - The binary feature vectors are a list of site IDs plus whether this session detected a hit.
    - Feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit).
    - Categories are a large, non-closed set (user IDs).
    - My total feature space is approximately 50 million items (URLs).
    - For any given test, I can only query approx. 0.2% of that space.
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.).
    - Getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries.
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID).

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (testing based on prior test data) and exploration (testing things that haven't been tested enough to find out how they perform). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far.

    Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for which expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
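    As a point of reference, one minimal explore/exploit heuristic of the kind being asked about is sketched below in Python. The data structures, the weight c, and the UCB-style bonus are illustrative assumptions, not anything cssfingerprint.com actually uses: each candidate site is scored by how evenly it splits the still-plausible users (an information-gain proxy, the exploitation term) plus an uncertainty bonus for rarely tested sites (the exploration term).

        import math

        def score_site(hit_users, candidate_users, times_tested, total_tests, c=1.4):
            """Score one site for the next query round (higher = more worth querying).

            hit_users       - set of user IDs known to have hit this site before
            candidate_users - set of user IDs still plausible for this session
            times_tested    - how often this site has been queried historically
            total_tests     - total queries made so far
            """
            n = len(candidate_users)
            if n == 0:
                return 0.0
            p = len(hit_users & candidate_users) / float(n)
            split_value = p * (1.0 - p)   # exploitation: best when ~half the candidates hit it
            bonus = c * math.sqrt(math.log(total_tests + 1) / (times_tested + 1))
            return split_value + bonus    # exploration: favor sites we know little about

        def pick_sites(stats, candidate_users, total_tests, budget):
            """stats maps site_id -> (hit_users, times_tested); return the top `budget` sites."""
            scored = sorted(stats.items(),
                            key=lambda item: score_site(item[1][0], candidate_users,
                                                        item[1][1], total_tests),
                            reverse=True)
            return [site for site, _ in scored[:budget]]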

  • How to import data to SAP

    - by Mehmet AVSAR
    Hi, as a complete stranger in the town of SAP, I want to transfer my own application's (mobile salesforce automation) data to SAP. My application has records of customers, stocks, inventory, invoices (and waybills), cheques, payments, collections, stock transfer data, etc. I have an additional database which holds matchings between records - i.e. a customer with ID 345 in my application has key 120-035-0223 in SAP. Every record, for sure, has to know its counterpart, including parameters.

    After searching Google and the SAP help site for a day, I found that this is going to be a bit more painful than I expected. The SAP site in particular doesn't give even a clue about it - at least, I couldn't find one. We have transferred our data to some other ERP systems before; some of them wanted XML files, others exposed their APIs.

    My point is: is SQL Server's SSIS an option for me? I hope it is, so I can fight on my own territory. Since client requests vary a lot, I count flexibility as the most important criterion. Also, I want to transfer as much data as I can. Any help is appreciated. Regards,

  • How to mmap the stack for the clone() system call on linux?

    - by Joseph Garvin
    The clone() system call on Linux takes a parameter pointing to the stack for the newly created thread to use. The obvious way to do this is to simply malloc some space and pass that, but then you have to be sure you've malloc'd as much stack space as that thread will ever use (hard to predict). I remembered that when using pthreads I didn't have to do this, so I was curious what it did instead.

    I came across this site which explains: "The best solution, used by the Linux pthreads implementation, is to use mmap to allocate memory, with flags specifying a region of memory which is allocated as it is used. This way, memory is allocated for the stack as it is needed, and a segmentation violation will occur if the system is unable to allocate additional memory."

    The only context I've ever heard mmap used in is for mapping files into memory, and indeed reading the mmap man page it takes a file descriptor. How can this be used for allocating a stack of dynamic length to give to clone()? Is that site just crazy? ;)

    In either case, doesn't the kernel need to know how to find a free chunk of memory for a new stack anyway, since that's something it has to do all the time as users launch new processes? Why does a stack pointer even need to be specified in the first place if the kernel can already figure this out?
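    One detail worth noting: mmap does not actually require a file. With MAP_ANONYMOUS (and a file descriptor of -1) the kernel hands back plain, lazily allocated memory, which is the kind of region a thread stack can live in. The clone() case itself is C territory, but the anonymous-mapping idea can be demonstrated from Python's mmap module - a minimal sketch under the assumption of a Linux host, unrelated to the actual pthreads source:

        import mmap

        STACK_SIZE = 8 * 1024 * 1024   # 8 MiB, a commonly seen default thread stack size

        # fileno = -1 plus MAP_ANONYMOUS asks the kernel for memory backed by nothing
        # but anonymous pages - no file descriptor or file is involved at all.
        region = mmap.mmap(-1, STACK_SIZE,
                           mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS,   # flags
                           mmap.PROT_READ | mmap.PROT_WRITE)        # prot

        region[0] = '\x00'   # touching a page is what actually commits physical memory
        region.close()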

  • Throttling outbound API calls generated by a Rails app

    - by Sharpie
    I am not a professional web developer, but I like to wrench on websites as a hobby. Recently, I have been playing with developing a Rails app as a project to help me learn the framework. The goal of my toy app is to harvest data from another service through their API and make it available for me to query using a search function. However, the service I want to pull data from imposes a rate limit on the number of API calls that may be executed per minute. I plan on having my app run a daily update which may generate a burst of API calls that far exceeds the limit provided by the external service. I wish to respect the performance of the external site and so would like to throttle the rate at which my app executes the calls.

    I have done a little bit of searching, and the overwhelming amount of tutorial material and pre-built libraries I have found covers throttling inbound API calls to a web app; I can find little discussion of controlling the flow of outbound calls. Being both an amateur web developer and a Rails newbie, it is entirely possible that I have been executing the wrong searches in the wrong places. Therefore my questions are:

    - Is there a nice website out there aggregating Rails tutorials that has material related to throttling outbound API requests?
    - Are there any Ruby gems or other libraries that would help me throttle the requests?

    I have some ideas of how I might go about writing a throttling system using a queue-based worker like DelayedJob or Resque to manage the API calls, but I would rather spend my weekends building the rest of the site if there is a good pre-built solution out there already.
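    The asker's stack is Ruby/Rails, but the mechanism being described is small enough to sketch generically. Below is a minimal token-bucket rate limiter written in Python purely as an illustration of the approach a queue worker could apply before each outbound call - the class, the 60-calls-per-minute figure, and call_external_api are all hypothetical:

        import time

        class TokenBucket(object):
            """Allow at most `rate` calls per `per` seconds, smoothing out bursts."""

            def __init__(self, rate=60, per=60.0):
                self.capacity = float(rate)
                self.tokens = float(rate)
                self.fill_rate = float(rate) / per    # tokens replenished per second
                self.last = time.time()

            def acquire(self):
                """Block until one call is allowed, then consume a token."""
                while True:
                    now = time.time()
                    self.tokens = min(self.capacity,
                                      self.tokens + (now - self.last) * self.fill_rate)
                    self.last = now
                    if self.tokens >= 1:
                        self.tokens -= 1
                        return
                    time.sleep((1 - self.tokens) / self.fill_rate)

        bucket = TokenBucket(rate=60, per=60.0)       # e.g. the remote API allows 60 calls/minute

        def call_external_api(endpoint):
            bucket.acquire()                          # wait here rather than hammer the service
            # ... perform the actual HTTP request against `endpoint` ...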

  • Distutils - Where Am I going wrong?

    - by RJBrady
    I wanted to learn how to create Python packages, so I visited http://docs.python.org/distutils/index.html. For this exercise I'm using Python 2.6.2 on Windows XP. I followed along with the simple example and created a small test project:

        person/
            setup.py
            person/
                __init__.py
                person.py

    My person.py file is simple:

        class Person(object):
            def __init__(self, name="", age=0):
                self.name = name
                self.age = age

            def sound_off(self):
                print "%s %d" % (self.name, self.age)

    And my setup.py file is:

        from distutils.core import setup

        setup(name='person',
              version='0.1',
              packages=['person'],
              )

    I ran python setup.py sdist and it created MANIFEST, dist/ and build/. Next I ran python setup.py install and it installed everything to my site-packages directory. I can open the Python console and import the person module, but I cannot import the Person class:

        >>> import person
        >>> from person import Person
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: cannot import name Person

    I checked the files added to site-packages and checked sys.path in the console; they seem OK. Why can't I import the Person class? Where did I go wrong?
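    For reference, and as a matter of ordinary Python package behaviour rather than anything distutils-specific: with the layout shown above, "import person" binds the package (its person/__init__.py), while the Person class lives in the person.person submodule. A short sketch of the two usual ways to reach it:

        # Either import from the submodule explicitly...
        from person.person import Person

        # ...or re-export the class from person/__init__.py so that
        # `from person import Person` works the way the post expected:
        #
        #     # person/__init__.py
        #     from person.person import Person

        p = Person(name="Ada", age=36)
        p.sound_off()   # prints "Ada 36"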

  • What are .tpl files? (PHP, web design)

    - by Dan
    Hi guys! A man wants me to redesign a site that runs on PHP (VideoCMS). But when I asked him to send me the source, he gave me *.tpl files instead of *.php. There is code like this inside them:

        {include file='header.tpl' p="article"}
        <br />
        <table width="886" border="0" cellspacing="0" cellpadding="0">
          <tr>
            <td width="150" valign="top">
              <div id="reg_box">
                <h3 class="captions">{$lang.articles}</h3>
                <div id="list_cats">
                  <ul>
                    {$article_categories}
                  </ul>
                </div>
              </div>
              <br />
              <div id="reg_box">
                <h3 class="captions">{$lang.members}</h3>
                {if $logged_in == '1'}
                  {include file='loggedin_body.tpl'}
                {else}
                  {include file='login_body.tpl'}
                {/if}

    or:

        {include file='header.tpl' p="index"}
        {php}
          $_SESSION['isFair'] = "Yes";
        {/php}

    Question: what is the interpreter of this code? How do I go about redesigning this site?

  • Simple .HTACCESS Passing Variables

    - by Willie Murray III
    Let's say I have a classifieds site which has a URL structure similar to the one listed below:

        http://takarat.com/openclassifieds/?category=iphone&location=tennessee

    This is what shows up when I click "tennessee" as the location and then click on "iphone" as the category. Now let's say this site has a search box, and I want the URL which comes up after using this search box to show up instead of the previous URL. Let's say that whenever I search for the word "iphone" while WITHIN the iphone CATEGORY, the following link shows up (the iphone SEARCH):

        http://takarat.com/openclassifieds/?s=iphone&category=iphone&location=tennessee

    I want THAT iphone SEARCH to come up whenever anyone clicks Tennessee, then Iphone. How would I do this? (By the way, I want this to be dynamic so that I can use the code for different combinations of locations and products, etc. - basically, I want the code to use the CATEGORY name to conduct the search.) I believe that this will involve "passing variables" from the original link. I am new to programming, so I may have my terminology wrong. Any help would be appreciated. Thanks in advance.

        TURN THIS: takarat.com/openclassifieds/?category=iphone[var 1]&location=tennessee[var2]
        INTO THIS: takarat.com/openclassifieds/?s=iphone[var 1]&category=iphone[var 1]&location=tennessee[var2]

  • PayPal: IPN vs PDT

    - by Tom
    Hi, I'm having some trouble choosing between PayPal's Instant Payment Notification (IPN) and Payment Data Transfer (PDT). Basically, users buy a one-off product on my site, pay on PayPal, and return to my site. I understand how IPN works, but I'm now seeing that I might be able to trigger the various actions that take place after a successful purchase more easily with PDT, as the data gets returned there and then (as opposed to needing a separate listener). However, PayPal's PDT documentation contains this cryptic line: "PDT is not meant to be used with credit card or Express Checkout transactions." ... but I can't find anything further whatsoever on the topic.

    (1) Are credit cards REALLY not meant to be used with PDT? I would like more than a sentence.
    (2) Does that mean that a user must have/create a PayPal account to pay?
    (3) Does it mean that if I want to allow users to pay with their PayPal accounts AND/OR with credit cards directly, I must implement IPN?

    Could anyone who's gone through this kindly shed some light? Thank you.

  • Eclipse Pydev Ctrl-Click (Go to Definition) Doesn't Work OSX

    - by Koobz
    My PyDev setup on OS X is kind of busted. I'm working on a Django project and I find that Ctrl-Click never actually goes to the definitions of any of my objects or functions. I actually have a symlink to Django/django in my workspace so that it's easier to cross-reference Django code. My guess is that something is wrong with the builder, but it doesn't throw up any errors. Does anyone have advice here?

    A different topic: does anyone know of a good way to use Ctrl-Shift-R (Open Resource) and filter files by folder? It's not that useful in Python projects where you have 20 urls.py files showing up.

  • Activate a python virtual environment using activate_this.py in a fabfile on Windows

    - by Rudy Lattae
    I have a Fabric task that needs to access the settings of my Django project. On Windows, I'm unable to install Fabric into the project's virtualenv (issues with the Paramiko + PyCrypto dependencies). However, I am able to install Fabric in my system-wide site-packages with no problem. I have installed Django into the project's virtualenv, and I am able to use all the "python manage.py" commands easily when I activate the virtualenv with the VIRTUALENV\Scripts\activate.bat script.

    I have a Fabric tasks file (fabfile.py) in my project that provides tasks for setup, test, deploy, etc. Some of the tasks in my fabfile need to access the settings of my Django project through "from django.conf import settings". Since the only usable Fabric install I have is in my system-wide site-packages, I need to activate the virtualenv within my fabfile so Django becomes available. To do this, I use the "activate_this" module of the project's virtualenv in order to get access to the project settings and such.

    Using "print sys.path" before and after I execute activate_this.py, I can tell the Python path changes to point to the virtualenv for the project. However, I still cannot import django.conf.settings. I have been able to successfully do this on *nix (Ubuntu and CentOS) and in Cygwin. Do you use this setup/workflow on Windows? If so, can you help me figure out why this won't work on Windows, or provide any tips and tricks to get around this issue? Thanks and cheers.

    REF: http://virtualenv.openplans.org/#id9 | Using Virtualenv without bin/python

    Local development environment:
    - Python 2.5.4
    - Virtualenv 1.4.6
    - Fabric 0.9.0
    - Pip 0.6.1
    - Django 1.1.1
    - Windows XP (SP3)
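    For reference, the "activate_this" pattern mentioned above (it comes from the virtualenv documentation linked in the REF line) looks roughly like this on Windows; the paths and the settings module name are hypothetical, and execfile is appropriate here because the environment is Python 2.5:

        # Near the top of fabfile.py, before anything tries to import Django
        activate_this = r"C:\path\to\project\env\Scripts\activate_this.py"   # hypothetical path
        execfile(activate_this, dict(__file__=activate_this))

        # Importing django.conf is now possible; actually using `settings` also
        # requires Django to know which settings module to load:
        import os
        os.environ["DJANGO_SETTINGS_MODULE"] = "myproject.settings"          # hypothetical module

        from django.conf import settings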

  • Error log states that I have a MySQL connect error, yet the script runs fine

    - by rob - not a robber
    Hello all. First, thanks for all the help I've received so far from Stack Overflow - I've learned much. Once again, I'm posing a rudimentary question that I've searched on, but cannot find the exact answer to, either here or on PHP.net. It's sort of like what this guy asked, but not exactly: http://stackoverflow.com/questions/288603/mysql-throwing-query-error-yet-finishing-query-just-fine-why

    So, I saw my error log ballooning up when I checked my site directory, and opened it to notice that a bunch of errors have been recorded since I wrote this new admin area. I know something is obviously awry with my scripting for the error to be thrown, but the weird thing is, the script actually runs through and pulls all the data I need without breaking. The log contains:

        PHP Warning: mysql_query() [function.mysql-query]: Access denied for user 'someuser'@'localhost' (using password: NO) in /home/mysite/adminconsole.php on line 15

    I don't get that, because that very line is where I set up my connection... the exact same way I do it everywhere else on the site with no problem. After that error, I have these thrown at the same time:

        [09-Apr-2010 08:44:18] PHP Warning: mysql_query() [function.mysql-query]: A link to the server could not be established in /home/mysite/adminconsole.php on line 15
        [09-Apr-2010 08:44:18] PHP Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /home/mysite/adminconsole.php on line 16

    From what I read in the other guy's thread, the problem may be the contents of the query - maybe my query is malformed? Thanks so much for any guidance you can provide. -Rob

  • How to combine a widget webapp framework with SEO-friendly CSS and JS files

    - by Ali
    Hi guys, I'm writing a webapp using the Zend Framework and a homebrew widget system. Every widget has a controller and can choose to render one of many views. This really helps us modularize, reconfigure, and reuse the widgets anywhere on the site. The problem is that the views of each widget contain their own JS and CSS code, which leads to very messy HTML when the whole page is put together. You get pockets of style and script tags everywhere. This is bad for a lot of reasons, as I'm sure you know, but it has a profound effect on our SEO as well. Several solutions that I've been able to come up with:

    1. Separate the CSS and JS of every view of every widget into its own file - this has serious drawbacks for load times (many more resources have to be loaded separately) and it makes coding very difficult, as now you have to have 3-4 files open just to edit a widget.
    2. Combine all the widget CSS into a single file (same with the JS) - this would lead to a massive load when someone enters the site, it mixes up the CSS and the JS for all widgets so they're harder to keep track of, and it has other problems that I'm sure you can think of.
    3. Create a system that uses method 1 (separate CSS and JS for every widget) and, when delivering the page, stitches all the CSS and JS together. This obviously needs more processing time, and of course the creation of such a system, etc.

    My question is what you guys think of these solutions, or whether there are pre-existing solutions that you know of (or any tech that might help) to solve this problem. I really appreciate all of your thoughts and comments! Thanks guys, Ali
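    Option 3 above is essentially a build-or-cache step for page assets. The project in question is PHP/Zend, but the shape of such a combiner is simple enough to sketch; the following Python code is purely illustrative (the file layout, names, and cache scheme are assumptions, not part of the original framework):

        import hashlib
        import os

        def combine_assets(widget_names, asset_dir, cache_dir, ext="css"):
            """Concatenate the per-widget asset files used by one page into a single
            cached bundle and return the bundle's path."""
            key = hashlib.md5("|".join(sorted(widget_names)) + "." + ext).hexdigest()
            bundle_path = os.path.join(cache_dir, key + "." + ext)

            if not os.path.exists(bundle_path):      # build once, then reuse
                parts = []
                for name in widget_names:
                    path = os.path.join(asset_dir, name, "widget." + ext)
                    if os.path.exists(path):
                        parts.append("/* %s */\n" % name + open(path).read())
                out = open(bundle_path, "w")
                out.write("\n".join(parts))
                out.close()

            return bundle_path   # the page template then emits one <link> or <script> tag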

  • Spotlight actually searching every file on "This Mac"

    - by Cawas
    I know of two ways to search for any file on your machine using Finder (some say it's Spotlight) and no Terminal. To preempt answers and comments about Terminal: I consider it either for scripting something or as a last resort. It's not practical for lots of usages - for instance, if you want to find something to attach to a mail, or embed in iTunes or any other app, you can just drag and drop one or many files, which is definitely not practical to do from Terminal. There are many use cases for searching "any" file, but the focus here is the graphical user interface. Well, the two ways basically are:

    1. Press Cmd + Opt + Spacebar and type in your search. Press the + button, then select "System files" and "are included". This is so far my preferred way, but I'm not sure it will go through every file.
    2. Open Finder, press Cmd + Shift + G and/or select just one folder. Type in your search and select the folder rather than "This Mac". This will bring up files not shown under "This Mac" if you select a folder outside of the default scope.

    Thing is, neither of those is really convenient or has the nice presentation of regular Spotlight, which you get from Cmd + Spacebar and just typing. And, as far as I've heard, the default behavior of Spotlight in Tiger was actually to find files anywhere. So, is there any way to make the process significantly simpler? Maybe some tweak, configuration, or a really good Spotlight alternative? I'd rather keep it simple and tweak Spotlight.

  • Codeigniter: Base_url doesn't seem to be working

    - by Dwayne
    I have developed a simple site that fetches tweets from the Twitter public timeline, caches them for 60 seconds, and so on. I recently moved hosts from Hostgator to Mediatemple, and my site was previously working fine on Hostgator. My application doesn't use a database connection, nor does it use any form of flat-file database; tweets are cached in an XML file stored in the root directory - not that most of that is important.

    The url helper is being included, as can be seen below (this is my helpers line from autoload.php):

        $autoload['helper'] = array('url');

    I have also removed index.php from the URL using .htaccess directives, and once again this was previously working on Hostgator (see my .htaccess code below):

        RewriteEngine On
        RewriteRule ^(application) - [F,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]

    In my home.php view file, which is in the views folder inside application, I am using the base_url() function - which was previously working on Hostgator inside my view files - appending its value as the base href value in my header:

        <base href="<?php echo base_url(); ?>" />

    Here is what my base_url value looks like in config.php:

        $config['base_url'] = "http://threetune.com/";

    What appears to be happening is that the base_url value is not to be seen at all - it doesn't appear to be echoed out, as if its value were empty for some reason. What makes things weirder is that I have a link in another view file called fetch.php, and for some reason it appears to be stripping out the value (XSS filtering is off):

        <a href="threetune/show"><img src="assets/images/cookie.jpg" /></a>

    The threetune/show part is not to be seen, and I only see an empty href value like this:

        <a href=""><img src="assets/images/cookie.jpg" /></a>

    Can anyone see anything wrong that I may have done - some kind of Mediatemple server limitation, or a PHP.ini flag that needs to be set? Thank you, and I hope I was descriptive enough.

  • HttpWebRequest Timeouts After Ten Consecutive Requests

    - by Bob Mc
    I'm writing a web crawler for a specific site. The application is a VB.NET Windows Forms application that is not using multiple threads - each web request is consecutive. However, after ten successful page retrievals every successive request times out. I have reviewed the similar questions already posted here on SO, and have implemented the recommended techniques into my GetPage routine, shown below:

        Public Function GetPage(ByVal url As String) As String
            Dim result As String = String.Empty

            Dim uri As New Uri(url)
            Dim sp As ServicePoint = ServicePointManager.FindServicePoint(uri)
            sp.ConnectionLimit = 100

            Dim request As HttpWebRequest = WebRequest.Create(uri)
            request.KeepAlive = False
            request.Timeout = 15000

            Try
                Using response As HttpWebResponse = DirectCast(request.GetResponse, HttpWebResponse)
                    Using dataStream As Stream = response.GetResponseStream()
                        Using reader As New StreamReader(dataStream)
                            If response.StatusCode <> HttpStatusCode.OK Then
                                Throw New Exception("Got response status code: " + response.StatusCode)
                            End If
                            result = reader.ReadToEnd()
                        End Using
                    End Using
                    response.Close()
                End Using
            Catch ex As Exception
                Dim msg As String = "Error reading page """ & url & """. " & ex.Message
                Logger.LogMessage(msg, LogOutputLevel.Diagnostics)
            End Try

            Return result
        End Function

    Have I missed something? Am I not closing or disposing of an object that should be? It seems strange that it always happens after ten consecutive requests.

    Notes:
    - In the constructor for the class in which this method resides I have the following: ServicePointManager.DefaultConnectionLimit = 100
    - If I set KeepAlive to true, the timeouts begin after five requests.
    - All the requests are for pages in the same domain.

    EDIT: I added a delay of between two and seven seconds between each web request so that I do not appear to be "hammering" the site or attempting a DoS attack. However, the problem still occurs.

  • MySQL Linked Server and SQL Server 2008 Express Performance

    - by Jeffrey
    Hi All, I am currently trying to setup a MySQL Linked Server via SQL Server 2008 Express. I have tried two methods, creating a DSN using the mySQL 5.1 ODBC driver, and using Cherry Software OLE DB Driver as well. The method that I prefer would be using the ODBC driver, but both run horrendously slow (doing one simple join takes about 5 min). Is there any way I can get better performance? We are trying to cross query between multiple mySQL databases on different servers, and this seems to be method we think would work well. Any comments, suggestions, etc... would be greatly appreciated. Regards, Jeffrey
