Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.

Page 558/1180 | < Previous Page | 554 555 556 557 558 559 560 561 562 563 564 565  | Next Page >

  • Javascript and jQuery (Fancybox) question

    - by songdogtech
    I'm using the JavaScript function below for Twitter sharing (as well as other services; the function code is simplified to just Twitter for this question). It grabs the to-be-shared page URL and title and is invoked in the link with onclick, which results in the Twitter share page loading in a pop-up browser window, i.e. <img src="/images/twitter_16.png" onclick="share.tw()" />. In order to be consistent with other design aspects of the site, I'd like the Twitter share page to open not in a standard browser window but in a Fancybox (jQuery) window. Fancybox can load an external page in an iframe when the img or href link carries a class (in this case class="iframe") and the corresponding call is made in the document-ready function in the header. Right now, of course, when I give the iframe class to the link that also has the onclick share.tw(), I get two popups: one browser window popup with the correct Twitter share page loaded, and a Fancybox jQuery popup that shows a site 404. How can I change the function to use Fancybox to present the Twitter share page? Is that a correct way to approach it? Or is there a better way, such as implementing the share function in jQuery, too? Thanks...

    JavaScript share function:

        var share = {
            tw: function(title, url) {
                this.share('http://twitter.com/home?status=##URL##+##TITLE##', title, url);
            },
            share: function(tpl, title, url) {
                if (!url) url = encodeURIComponent(window.location);
                if (!title) title = encodeURIComponent(document.title);
                tpl = tpl.replace("##URL##", url);
                tpl = tpl.replace("##TITLE##", title);
                window.open(tpl, "sharewindow" + tpl.substr(6, 15), "width=640,height=480");
            }
        };

    It is invoked, i.e.:

        <img src="/images/twitter_16.png" onclick="share.tw()" />

    Fancybox function, invoked by adding class="iframe" to the img or href link:

        $(".iframe").fancybox({
            'width'         : '100%',
            'height'        : '100%',
            'autoScale'     : false,
            'transitionIn'  : 'none',
            'transitionOut' : 'none',
            'type'          : 'iframe'
        });
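
    One possible direction (an untested sketch, assuming Fancybox 1.3's programmatic $.fancybox() call with href and type options): keep building the share URL exactly as before, but hand it to Fancybox instead of window.open(), and drop class="iframe" from the link so the element-bound Fancybox handler doesn't also fire an empty popup.

        var share = {
            tw: function(title, url) {
                this.share('http://twitter.com/home?status=##URL##+##TITLE##', title, url);
            },
            share: function(tpl, title, url) {
                if (!url) url = encodeURIComponent(window.location);
                if (!title) title = encodeURIComponent(document.title);
                tpl = tpl.replace("##URL##", url).replace("##TITLE##", title);
                // Hand the share URL to Fancybox instead of window.open().
                $.fancybox({
                    'href'      : tpl,
                    'type'      : 'iframe',
                    'width'     : 640,
                    'height'    : 480,
                    'autoScale' : false
                });
            }
        };

    One caveat worth checking first: twitter.com may refuse to load inside an iframe (frame-busting / X-Frame-Options), which would break any iframe-based lightbox regardless of how it is invoked.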

    Read the article

  • ecommerce platform or from scratch? customer specific catalogs and purchase orders

    - by rafi
    I have a possible freelance job in front of me for a distributor who wants product ordering set up but the orders are all P.O.s basically - no actual credit card or paypal transaction. The customer is simply billed and the order archived. Customers will need to login to this site and each customer will have their own custom catalog of a few dozen products which have been setup via a control panel this distributor uses. So there will be a master catalog of over 1,000 products (perhaps browsable but not to be ordered from on the site) but each customer will only be able to order from the products specified for their accounts. I know I can build this from scratch but I figured it's worth looking into what ecommerce platforms would get me a nice head start. Obviously shopping cart, order history, catalog management are concepts that I can reuse but are any of the ecommerce systems out there also capable of handling custom catalogs (maybe as multi-stores?) or transactions billed to accounts without credit card? The more I could reuse the better. I've messed with OSCommerce (way back) and a little Zen Cart more recently. I've also worked on a number of totally custom e-commerce sites. But my knowledge of the open source e-commerce tools is pretty limited and I'm trying to keep the effort as simple as I possibly can on this. I'm pretty flexible on the language of the platform by the way. Thanks in advance.

    Read the article

  • Glassfish: Storing Java classes in the docroot folder?

    - by Tom Marthenal
    I'm very new to using Glassfish or JSP. I have this working in NetBeans (which has Glassfish bundled), but when I try to put it on my server, which is running Glassfish Server, I really don't know what I'm doing. I can place a JSP file at "domains/domain1/docroot/index.jsp" and it will work when I visit my site, but I can't, for some reason, get Java classes to work. I copied the files in "/build/web/" from the NetBeans project to the docroot folder on my server. The errors I get when I visit the site are:

        org.apache.jasper.JasperException: PWC6033: Error in Javac compilation for JSP
        PWC6199: Generated servlet error:
        string:///index_jsp.java:7: package test does not exist
        PWC6197: An error occurred at line: 5 in the jsp file: /index.jsp
        PWC6199: Generated servlet error:
        string:///index_jsp.java:52: cannot find symbol
        symbol  : class TestClass
        location: class org.apache.jsp.index_jsp
        PWC6197: An error occurred at line: 5 in the jsp file: /index.jsp
        PWC6199: Generated servlet error:
        string:///index_jsp.java:52: cannot find symbol
        symbol  : class TestClass
        location: class org.apache.jsp.index_jsp

    The actual Java class is at "WEB-INF/classes/test/TestClass.class" (it is pre-compiled). I really have no idea what I'm doing wrong, so any help is greatly appreciated. Thanks!
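
    As an aside (a hedged suggestion, not a diagnosis): loose files copied into docroot are served as static content plus ad-hoc JSPs rather than forming a deployed web application, so WEB-INF/classes under docroot would not end up on the JSP compiler's classpath. The usual route is to deploy the WAR that NetBeans builds (the path and WAR name below are placeholders):

        asadmin deploy --contextroot / /path/to/YourWebApp.war

    Deployed that way, the JSP and WEB-INF/classes travel together and the test.TestClass import should resolve.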

    Read the article

  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all, my company develops a CDN / web-hosting solution. We have a middleware that serves as the business logic layer and exposes a web service for the front-end. I'm looking for a clean solution to feature management - there are uncertainties and ugly workarounds in the software where the devs say "when it happens or it breaks, we will fix it". For example, here are some features a web publisher can have:

    - Sites limit
    - Bandwidth limit
    - SSL feature + SSL configuration per site

    If we downgrade a web publisher who has 10 sites down to 5 sites, we can either choose not to suspend the extra 5 sites, or prompt for suspension before the downgrade. For the bandwidth limit the downgrade is easy: when the bandwidth check runs and the publisher has exceeded the limit, we suspend his account. For the SSL feature, every SSL configuration is tied to a site - what should happen to those configuration objects when the SSL feature is downgraded from enabled to disabled? So as you can see, there are many different situations and different ways of handling them. I could build a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade. Or a system that ignores the impacts and just upgrades/downgrades - bad. Or a system designed so that the client code has to be aware of the complex feature matrix (or I could expose a helper for the client code to check whether a feature is DEFUNCT). There are many approaches I'm still weighing, and I'm puzzled. How would you tackle this issue, and are there any recommended patterns, books or software you think I could refer to? Appreciate your help.
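
    One shape this sometimes takes (a hedged sketch only - Publisher, Plan and Site below are hypothetical stand-ins for the real domain model): give each feature its own handler that can both report the impact of a plan change and apply it, so the feature matrix lives in the middleware rather than in client code, and "prompt before downgrade" becomes a call to assessImpact() first.

        import java.util.ArrayList;
        import java.util.List;

        /** Publisher, Plan and Site are hypothetical domain types. */
        interface FeatureHandler {
            /** Describe what a plan change would break, without changing anything yet. */
            List<String> assessImpact(Publisher publisher, Plan newPlan);

            /** Apply the plan change (suspend extra sites, disable SSL configs, ...). */
            void apply(Publisher publisher, Plan newPlan);
        }

        class SiteLimitHandler implements FeatureHandler {
            public List<String> assessImpact(Publisher publisher, Plan newPlan) {
                List<String> impacts = new ArrayList<String>();
                int excess = publisher.getSites().size() - newPlan.getSiteLimit();
                if (excess > 0) {
                    impacts.add(excess + " site(s) would be suspended");
                }
                return impacts;
            }

            public void apply(Publisher publisher, Plan newPlan) {
                List<Site> sites = publisher.getSites();
                for (int i = newPlan.getSiteLimit(); i < sites.size(); i++) {
                    sites.get(i).suspend();
                }
            }
        }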

    Read the article

  • Groovy htmlunit getFirstByXPath returning null

    - by StartingGroovy
    I have had a few issues with HtmlUnit returning nulls lately and am looking for guidance. Each of my attempts at grabbing the first row of a website has returned null. I am wondering if someone can A) explain why they might be returning null and B) explain better ways (if there are some) to go about getting the information. Here is my current code (the URL is in the source):

        client = new WebClient(BrowserVersion.FIREFOX_3)
        client.javaScriptEnabled = false
        def url = "http://www.hidemyass.com/proxy-list/"
        page = client.getPage(url)

        IpAddress = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[2]").getValue()
        println "IP Address is: $IpAddress" //returns null

        //Port_Number is an Image

        Country = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[4][@class='country']/@rel").getValue()
        println "Country abbreviation is: $Country"

        //differentiate speed and connection by name of gif?

        Type = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[7]").getValue()
        println "Proxy type is: $Type"

        Anonymity = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[8]").getValue()
        println "Anonymity Level is: $Anonymity"

        client.closeAllWindows()

    Right now all of my XPaths return null, and .getValue() obviously doesn't work on null. I also have questions about what I should do with the port, since it is an image - is there a better alternative than downloading it and attempting to solve it by OCR?

    Side note: there is no significance to this particular site; I was just looking for a site I could practice scraping on (on the last one I ran into issues with fragment identifiers and couldn't get an answer: HtmlUnit getByXpath returns null and HtmlUnit and Fragment Identities).
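
    A hedged debugging sketch in the same Groovy/HtmlUnit setup (no claim about what this particular site does): dump the DOM HtmlUnit actually built, use a looser relative XPath, and null-check before dereferencing - an absolute /html/body/... path only has to be off by one element (an extra div, a missing tbody) for getFirstByXPath to return null.

        import com.gargoylesoftware.htmlunit.WebClient
        import com.gargoylesoftware.htmlunit.BrowserVersion

        def client = new WebClient(BrowserVersion.FIREFOX_3)
        client.javaScriptEnabled = false
        def page = client.getPage("http://www.hidemyass.com/proxy-list/")

        // Inspect the markup HtmlUnit actually parsed - it can differ from what
        // a browser's "view source" or DOM inspector shows.
        println page.asXml()

        // Relative, structure-tolerant XPath instead of an absolute /html/body/... path.
        def cell = page.getFirstByXPath("//table//tr[td]/td[2]")
        if (cell == null) {
            println "XPath matched nothing - adjust it against the asXml() output above"
        } else {
            println "IP Address is: ${cell.asText()}"
        }

        client.closeAllWindows()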

    Read the article

  • Javascript document.open asynchronous?

    - by Alex Schneider
    So on my site there is a JavaScript function that loads a new page from the server via XMLHttpRequest. After that it replaces the current page with the new one:

        var post = new XMLHttpRequest();
        post.open('POST', data);
        post.onload = function() {
            var doc = document.open("text/html", "replace");
            doc.write(post.responseText);
            doc.close();
            goOn();
        }

        function goOn() {
            console.log($('img:visible'));
        }

    One could assume that after doc.close() the document has changed and is ready. But it is not: e.g. if I load a very large responseText, goOn() only logs an empty result. Obviously goOn() gets called, in that case, before the DOM is ready to be read. Unfortunately there is no "ready" event fired after write() finishes... How can I be sure it is finished?

    EDIT: goOn() logs this to the Chrome console:

        [prevObject: p.fn.p.init[1], context: #document, selector: "img:visible"]
            context: #document
            length: 0
            prevObject: p.fn.p.init[1]
            selector: "img:visible"
            __proto__: Object[0]

    But if I type $('img:visible') into the console manually right after that, it shows me all the images...
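
    One possible workaround (a sketch - document.open()/write() semantics are notoriously browser-dependent): don't call goOn() directly after close(); wait until the re-written document reports itself complete.

        post.onload = function() {
            var doc = document.open("text/html", "replace");
            doc.write(post.responseText);
            doc.close();

            // Poll readyState until the freshly written document (including its
            // subresources) has finished loading, then run goOn().
            (function waitForReady() {
                if (document.readyState === "complete") {
                    goOn();
                } else {
                    setTimeout(waitForReady, 50);
                }
            })();
        };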

    Read the article

  • SSL + Jquery + Ajax

    - by chobo2
    Hi, I'm starting to look at security for my site. I would consider my site a very low security risk, as it holds no personal information from the user other than an email address. However, the risk will go up a bit as I am partnering with a company, and the initial password for this company's users will be the same password they use to get onto their network and essentially every piece of software. So I have to tighten up my security (which is fine by me... I wanted to get around to this anyway).

    One of my security concerns is this: a user logs in via a (non-Ajax) form submit, the password is hashed and salted and compared to the one in the database, and the user is rejected or let through. This uses no jQuery or Ajax, just ASP.NET MVC and C#. Still, if my understanding is right, the password is sent in clear text. If I use SSL, would I no longer need to worry about that? And if that is true, is that all I need?

    Second, the user can change their password at any time, and this is done through Ajax. When the password is sent, it is sent in clear text (I can verify this by looking at Firebug). If I have SSL enabled on this page, is that all I need, or do I need to do more?

    So I am just kind of confused about what I need to make the password being sent to the server (both the Ajax and full-post ways) secure. Is SSL enough, and if not, what is the next layer of security?

    Read the article

  • The Current State Of Serving a PHP 5.x App on the Apache, LightTPD & Nginx Web Servers?

    - by Gregory Kornblum
    Being stuck in an MS-stack architecture/development position for the last year and a half has kept me from staying as current as I would have liked on the recent evolution of open-source web servers. I am now building an application and system architecture on an open-source stack, and sadly I don't have the time to give each of the above-mentioned web servers a thorough test of my own, so I figured I'd get input from the best development community site and, more specifically, the people who make it so. This is a site that serves as an information resource for a specific domain and target audience, with features that help users not only find the information but also interact with one another in various ways and for various reasons. I chose the open-source stack for its wealth of resources and for offerings that beat the MS stack's (e.g. WordPress vs BlogEngine.NET). I feel Java sits somewhere between these stacks in this regard, although I am not ruling out using it in areas unrelated to the actual web app itself, such as background processes. I have already settled on PHP (using the CodeIgniter framework and APC), MySQL (InnoDB) and Memcached on CentOS, and I am definitely serving static content with Nginx. However, there is no consensus on which of the three servers mentioned is best for serving dynamic content in terms of performance. LightTPD seems to still have its memory-leak issue, which rules it out if true; Nginx seems not yet mature enough for this role; and of course Apache tries to be everything for everybody. Whichever one I choose, I am going to compile it with as many performance tweaks as possible, such as static linking. I believe I can get Apache to match the other two at serving dynamic content through this process by not having it serve anything static, but during my research it seemed the others are still worth considering. So, with all things considered, I would love to hear what everyone here has to say on the matter. Thanks!

    Read the article

  • Changing the modified date of a message in Exchange 2010

    - by jgoldschrafe
    My organization is in the middle of a process to move their Exchange 2010 messaging system from one archiving platform to another. As part of this process, we need to restore all archived messages back into users' email accounts, and then let the new system import them again. The problem is that when the messages are dumped back, the modified date on the message is set to the date it was restored, which trips up message archiving and basically means nobody will have anything archived for six months. So you don't have to ask: no, our archiving platform only uses the modified timestamp on the message and cannot be altered to temporarily use the sent or received timestamp instead to determine whether to archive it. We and others have asked for the feature, but it doesn't exist right now. What we're looking for is a method to go through the user's mailbox and alter the modified timestamp of each message (or preferably received more than X months ago) to the received date of the message. We also don't want to spend more on this tool per user than we're spending on the archiving solution in the first place. We've run across a few tools that are something ridiculous like $25 per user. I don't think we're even paying close to that for Exchange and the archiving solution put together. Whatever we settle on should function on a live mailbox with no downtime. Playing around with PST imports and hacky little things like that isn't going to work. We're fine with programming/scripting, if anyone knows the best way through PowerShell, COM automation or some other way to best handle this.

    Read the article

  • Good Replacement for User Control?

    - by David Lively
    I found user controls to be incredibly useful when working with ASP.NET WebForms. By encapsulating the code required for displaying a control together with its markup, creating reusable components was very straightforward and very, very useful. While MVC provides convenient separation of concerns, this seems to break encapsulation (i.e., you can add a control without adding or using its supporting code, leading to runtime errors). Having to modify a controller every time I add a control to a view seems to me to integrate concerns, not separate them. I'd rather break the purist MVC ideology than give up the benefits of reusable, packaged controls. I need to be able to include components similar to WebForms user controls throughout a site, but not for the entire site, and not at a level that belongs in a master page. These components should have their own code, not just markup (to interact with the business layer), and it would be great if the page controller didn't need to know about the control. Since MVC user controls don't have code-behind, I can't see a good way to do this. I've searched previous SO questions and have yet to find a good answer.

    Options so far (in an attempt to avoid turning the comments section into a discussion):

    RenderAction - This allows the view to call another controller, which is responsible for interacting with the BLL and whatever data is necessary for its corresponding view. The calling view needs to be aware of the sub-controller. This seems to provide a nice way to encapsulate partial views and controls without having to modify the calling controller.

    RenderPartial - The calling controller is still responsible for executing whatever code is associated with the partial view, and for making sure that the model passed to the partial view contains the data it expects. Effectively, modifying the partial view potentially means modifying the calling controller. Annoying, especially if it is used in multiple places.

    Portable Areas - Place each control in its own project/area?
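
    For reference, a minimal sketch of the RenderAction option (assuming ASP.NET MVC 2, where child actions are built in; the names are made up): the widget keeps its own controller code, and the hosting view needs only a single call, so the page's controller never learns about it.

        using System.Web.Mvc;

        // The "control" owns its own controller code and data access.
        public class BannerController : Controller
        {
            [ChildActionOnly] // reachable only via Html.RenderAction / Html.Action
            public ActionResult Promo()
            {
                // In a real app this would call into the business layer.
                var model = new { Message = "This week's promotion" };
                return PartialView("Promo", model);
            }
        }

    The hosting view then just contains <% Html.RenderAction("Promo", "Banner"); %> wherever the widget should appear.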

    Read the article

  • How to automatically show Title of the Entries/Articles in the Browser Title Bar in ExpressionEngine 2?

    - by Ibn Saeed
    How would I output the title of an entry in ExpressionEngine and display it in the browser's title bar? Here is the content of my page's header:

        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Test Site</title>
        <link rel="stylesheet" href="{stylesheet=site/site_css}" type="text/css" media="screen" />
        </head>

    What I need is for each page to display the title of the entry in my browser's title bar — how can I achieve that?

    UPDATE - here is part of how I have done it:

        {exp:channel:entries channel="news_articles" status="open|Featured Top Story|Top Story" limit="1" disable="member_data|trackbacks|pagination"}
        {embed="includes/document_header" page_title=" | {title}"}
        <body class="home">
        <div id="layoutWrapper">
        {embed="includes/masthead_navigation"}
        <div id="content">
        <div id="article">
          <img src="{article_image}" alt="News Article Image" />
          <h4>{title}</h4>
          <h5><span class="by">By</span> {article_author}</h5>
          <p>{entry_date format="%M %d, %Y"} -- Updated {gmt_edit_date format="%M %d, %Y"}</p>
          {article_body}
        {/exp:channel:entries}
        </div>

    What do you think?

    Read the article

  • RewriteRule is redirecting rather than rewriting

    - by James Doc
    At the moment I have two machines that I do web development on: an iMac for work at the office and a MacBook for when I have to work on the move. Both run OS X 10.6 and have the same versions of PHP, Apache, etc. Both computers have the same files for the website, including the .htaccess file (see below). On the MacBook the URLs are rewritten nicely, masking the URL they point to (e.g. site/page/page-name); however, on the iMac they simply redirect to the page (e.g. site/index.php?method=page&value=page-name), which is making switching back and forth between machines a bit of a pain! I'm sure it must be a config setting somewhere, but I can't for the life of me find it. Has anyone got a remedy? Many thanks. I'm also fairly convinced there must be a much nicer way of writing this .htaccess file without losing access to several key folders!

        Options +FollowSymlinks
        RewriteEngine on
        RewriteBase /In%20Progress/Vila%20Maninga/
        RewriteRule ^page/([a-z|0-9_&;=-]+) index.php?method=page&value=$1 [NC]
        RewriteRule ^tag/([a-z|0-9_]+) index.php?method=tag&value=$1 [NC]
        RewriteRule ^search/([a-z|0-9_"]+) index.php?method=search&value=$1 [NC]
        RewriteRule ^modpage/([con0-9-]+) index.php?method=modpage&value=$1 [NC]
        RewriteRule ^login index.php?method=login [NC]
        RewriteRule ^logout index.php?method=logout [NC]
        RewriteRule ^useraccounts index.php?method=useraccounts [NC]

    Read the article

  • What runs faster? Wordpress or Drupal 6.x?

    - by electblake
    So... I run a pretty large WordPress blog. Currently it gets around 20k+ pageviews a day, and it's always a struggle to keep the bad boy running quickly - I currently run it on vps.net with CentOS 5.3. I am also a Drupal developer by trade, so I love the CMS framework for its versatility and portability (I can take work from one site and implement it on another with great ease). MY QUESTION IS: which is faster, WordPress 3.x or Drupal 6.x? I'd love to migrate my site to Drupal to be able to roll out new features etc. (which I find awkward to do in WordPress), but I am scared that Drupal may not be able to handle the traffic. Any opinions? I know that some major players use Drupal - as Dries documents well on his blog - but I'm under no illusions: Drupal can be a real hog. Thanks for any/all help! Please try to avoid server-optimization talk unless it pertains to WordPress or Drupal 6.x specifically; I love learning more about optimizations, but I do want to sort out which platform is quicker :) P.S. - I realize the fastest option is to use a lower-level framework (with less overhead) like CakePHP etc., but assume that isn't an option ;)

    Read the article

  • HTML relative links on various domains

    - by Adam Kiss
    I have a quickie: when you code/develop themes, how do you link to the various files in your HTML/CSS code? Example: at our firm we mostly use <base target="http://whatever"> in our main template and then just <img src="./images/file.png"> in our HTML, "/category/page" as links, and something similar in our CSS. However, when testing on different machines, we use an IP address rather than localhost on the coder's main dev station, so all base links break (because localhost points to the viewing machine, not the coder's, on our network). The same thing happens when updating pages - on the dev server we have to edit the base target so that browsing the site won't take us to the live site. This part is actually rather simple PHP (if ... echo, else echo something else), but it still doesn't solve the wider coding-and-testing problem. So, my question is: how do YOU solve it? How do you use relative links that basically don't care what domain the page is on and don't care about URL rewriting? (Because ../images/ resolves differently for / than for /something/somethingElse/page.)

    Read the article

  • How to provide a temporary URL for custom domain in Wordpress multisite install?

    - by Milan Babuškov
    I have a website with a WordPress 3.0.4 multisite install. Some users register their blogs as something.mydomain.com and that works automatically. However, some users prefer to use their own domain names, like something.com. This also works fine once they set up the CNAME record to point to my server, but it takes 24-48 hours for that change to take effect. I'd like to be able to offer the user a temporary URL that works out of the box until the DNS changes have propagated, but I have no idea how to do it. For example, something.com should also be accessible as something.tempdomain.com ("tempdomain" being a DNS zone I control). I thought about replacing $_SERVER variables in index.php or the .htaccess file when the temporary domain is accessed, and this works for the first page load. However, all the links in the generated page point to the original domain, which is not yet ready.

    UPDATE: I managed to get it working for the site itself by manipulating the $_SERVER variables so WordPress thinks it's creating a page for a different site. I did this in index.php, so before any WP code runs I call ob_start(), and later use ob_get_contents() to grab the page generated by WordPress and then str_replace() the links back to the temporary domain. The problem I still have is the admin page. Even though the link says http://site1.tempdomain.com/wp-admin, when opened in a browser it redirects to maindomain.com/wp-signup.php?new=site1.tempdomain. I don't understand how WP detects that I supplied a "fake" domain when the $_SERVER vars are changed?
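
    For reference, a minimal sketch of the front-controller trick described in the update (assuming it sits at the top of the root index.php; the domain names are placeholders). Note that it would not help /wp-admin, which loads WordPress through its own entry points and never passes through this file - multisite looks the requested host up in its blogs table and bounces unknown hosts to wp-signup.php, so the admin side would need the temporary host actually mapped (e.g. via the SUNRISE / domain-mapping mechanism) rather than output rewriting.

        <?php
        // Top of index.php, before WordPress loads. Placeholder domains throughout.
        define('WP_USE_THEMES', true);

        $temp_suffix = '.tempdomain.com';
        $host        = $_SERVER['HTTP_HOST'];

        if (substr($host, -strlen($temp_suffix)) === $temp_suffix) {
            $real_host = 'something.com';          // would normally be looked up per blog
            $_SERVER['HTTP_HOST']   = $real_host;  // make WP think the real domain was requested
            $_SERVER['SERVER_NAME'] = $real_host;

            ob_start();
            require './wp-blog-header.php';        // what index.php normally loads
            $page = ob_get_clean();

            // Point the generated links back at the temporary domain before output.
            echo str_replace($real_host, $host, $page);
            return;
        }

        require './wp-blog-header.php';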

    Read the article

  • Ideal HTTP cache control headers for different types of resources

    - by chris_l
    I want to find a minimal set of headers that works with "all" caches and browsers (also when using HTTPS!). On my (GWT-based) web site I'll have three kinds of resources:

    1. Forever cacheable (public / equal for all users) - These files never change, and they get a filename based on the MD5 of their contents (this is GWT's approach). They should be cached as much as possible, even when using HTTPS (so I assume I should set Cache-Control: public, especially for Firefox?).

    2. Changing for every new version of the site (public / equal for all users) - These files can be cached, but probably need to be revalidated every time.

    3. Individual for each request (private / user-specific) - These resources (e.g. JSON responses) should never be cached unencrypted to disk, under any circumstances. (Maybe I'll have a few specific requests that could be cached.)

    I have a general idea of which headers I would probably use for each type, but there's always something I could be missing.
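
    For comparison, one commonly used set of response headers for the three categories (a sketch - the max-age value is arbitrary, and the behaviour of older proxies and browsers over HTTPS is exactly the part that varies most):

        # 1. Forever cacheable, content-hashed filename
        Cache-Control: public, max-age=31536000
        #   ("public" is also what lets older Firefox versions disk-cache HTTPS responses)

        # 2. Changes with each release of the site - may be stored, but revalidated on every use
        Cache-Control: no-cache
        ETag: "<build-or-content-hash>"

        # 3. User-specific responses - never written to shared or disk caches
        Cache-Control: private, no-store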

    Read the article

  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS replication and namespace group that we use to serve the company's files. This has been operating fine for us for some time now, and continues to do so; however, a situation arose yesterday afternoon that has left us stumped. Our namespace is presented as \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business]. We had a problem this morning in which a number of users started mapping their AD home drive directly to the \\domain.co.uk\public directory, and we found that they had read/write access to it. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of Group Policy. However, I would like to make \public read-only; the trouble is I can't work out where the ACLs for that folder are held. All the folders presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups applied via the 'Security' tab. I'd like to do the same on \public, but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk, but can't find it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?

    Read the article

  • How to use urlencoded urls with the GA tracking code

    - by Fake51
    I've got a site where a booking page has an embedded iframe containing the actual booking form. I need to track traffic from the parent site to the child iframe. This should all work just fine with the normal GA code, using JavaScript like:

        <script type="text/javascript">
        try {
            var pageTracker = _gat._getTracker("<UA CODE HERE>");
            pageTracker._setDomainName("none");
            pageTracker._setAllowLinker(true);
            pageTracker._setAllowHash(false);
            pageTracker._trackPageview();
        } catch(err) {}
        </script>

    And then of course using the _getLinkerUrl() function to get a URL with the proper parameters. So far so good - this basically works (at least I know the principle works, as I've got it working on other pages). However, and this is the problem: the server that serves up the page in the iframe was configured by a complete and utter moron (or, alternatively, created by a complete and utter moron). It chokes on '=' characters, so in order to request the iframe page I need to urlencode the '=' signs - but the GA code seems unable to parse the URL when this is done. So the questions:

    1. Has anyone come across this?
    2. Does anyone know of any solutions to this problem?

    Read the article

  • Redirect requests only if the file is not found?

    - by ZenBlender
    I'm hoping there is a way to do this with mod_rewrite and Apache, but maybe there is another way to consider too. On my site, I have directories set up for re-skinned versions of the site for clients. If the web root is /home/blah/www, a client directory would be /home/blah/www/clients/abc. When you access the client directory via a web browser, I want it to use any requested files in the client directory if they exist. Otherwise, I want it to use the file in the web root. For example, let's say the client does not need their own index.html. Therefore, some code would determine that there is no index.html in /home/blah/www/clients/abc and will instead use the one in /home/blah/www. Keep in mind that I don't want to redirect the client to the web root at any time, I just want to use the web root's file with that name if the client directory has not specified its own copy. The web browser should still point to /clients/abc whether the file exists there or in the root. Likewise, if there is a request for news.html in the client directory and it DOES exist there, then just serve that file instead of the web root's news.html. The user's experience should be seamless. I need this to work for requests on any filename. If I need to, for example, add a new line to .htaccess for every file I might want to redirect, it rather defeats the purpose as there is too much maintenance needed, and a good chance for errors given the large number of files. In your examples, please indicate whether your code goes in the .htaccess file in the client directory, or the web root. Web root is preferred. Thanks for any suggestions! :)
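
    This is the sort of thing mod_rewrite's file-existence tests handle without per-file rules. A sketch of an .htaccess placed in the web root (assuming client skins live under /clients/ and AllowOverride permits it): any request into a client directory whose target doesn't exist there is rewritten internally to the same path in the root, so the browser keeps showing /clients/abc/... and no redirect is issued.

        RewriteEngine On

        # Only fall back when the requested file does not exist inside /clients/<name>/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Serve the web root's copy while leaving /clients/<name>/... in the address bar
        RewriteRule ^clients/[^/]+/(.*)$ $1 [L]

    Directory indexes (a bare /clients/abc/) and assets referenced with relative paths may need extra care, but the existence checks above are the core of the approach.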

    Read the article

  • Controller actions appear to be synchronous though on different requests?

    - by Oded
    I am under the impression that the code below should work asynchronously. However, when I look at Firebug, I see the requests fired asynchronously but the results coming back synchronously.

    Controller:

        [HandleError]
        public class HomeController : Controller
        {
            public ActionResult Status()
            {
                return Content(Session["status"].ToString());
            }

            public ActionResult CreateSite()
            {
                Session["status"] += "Starting new site creation";
                Thread.Sleep(20000); // Simulate long running task
                Session["status"] += "<br />New site creation complete";
                return Content(string.Empty);
            }
        }

    Javascript/jQuery:

        $(document).ready(function () {
            $.ajax({
                url: '/home/CreateSite',
                async: true,
                success: function () {
                    mynamespace.done = true;
                }
            });
            setTimeout(mynamespace.getStatus, 2000);
        });

        var mynamespace = {
            counter: 0,
            done: false,
            getStatus: function () {
                $('#console').append('.');
                if (mynamespace.counter == 4) {
                    mynamespace.counter = 0;
                    $.ajax({
                        url: '/home/Status',
                        success: function (data) {
                            $('#console').html(data);
                        }
                    });
                }
                if (!mynamespace.done) {
                    mynamespace.counter++;
                    setTimeout(mynamespace.getStatus, 500);
                }
            }
        }

    Additional information: IIS 7.0, Windows 2008 R2 Server, running in a VMware virtual machine.

    Can anyone explain this? Shouldn't the Status action be returning practically immediately instead of waiting for CreateSite to finish?

    EDIT: How can I get the long-running process to kick off and still get status updates?
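
    A hedged aside that may explain the serialization: ASP.NET takes an exclusive lock on the session for any request that can write to it, so the Status poll from the same browser session queues behind the long-running CreateSite. One workaround, sketched with made-up names below (assuming .NET 4 for ConcurrentDictionary; a locked Dictionary behaves the same on earlier versions), is to keep the progress text outside the session:

        using System.Collections.Concurrent;
        using System.Threading;
        using System.Web.Mvc;

        [HandleError]
        public class HomeController : Controller
        {
            // Progress store that is not guarded by the per-session request lock.
            private static readonly ConcurrentDictionary<string, string> Progress =
                new ConcurrentDictionary<string, string>();

            public ActionResult Status(string jobId)
            {
                string status;
                Progress.TryGetValue(jobId, out status);
                return Content(status ?? string.Empty);
            }

            public ActionResult CreateSite(string jobId)
            {
                Progress[jobId] = "Starting new site creation";
                Thread.Sleep(20000); // Simulate long running task
                Progress[jobId] += "<br />New site creation complete";
                return Content(string.Empty);
            }
        }

    The client would then pass the same jobId (any unique token it generates) to both /home/CreateSite and /home/Status.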

    Read the article

  • PHP's fopen is terminally failing

    - by Skittles
    Okay, I have GOT to be missing something totally rudimentary here. I have an extremely simple use of PHP's fopen function, but for some reason it will not open the file no matter what I do. The odd part about this is that I use fopen in another function in the same script and it's working perfectly. I'm using fclose in both functions, so I know it's not a matter of a rogue file handle. I have also confirmed the file's path and the existence of the target file. I'm running the script at the command line as root, so I know it's not Apache that's the cause, and since I am running the script as root, I am fairly confident that permissions are not the issue. So, what on earth am I missing here?

        function get_file_list()
        {
            $file = '/home/site/tmp/return_files_list.txt';
            $fp = fopen($file, 'r') or die("Could not open file: /home/site/tmp/return_files_list.txt for reading.\n");
            $files_list = array();
            while($line = fgets($fp)) {
                $files_list[] = $line;
            }
            fclose($fp);
            return $files_list;
        }

        function num_records_in_file($filename)
        {
            $fp = fopen( $filename, 'r' ); # or die("Could not open file: $filename\n");
            $counter = 0;
            if ($fp) {
                while (!feof( $fp )) {
                    $line = fgets( $fp );
                    $arr = explode( '|', $line );
                    if (( ( $arr[0] != 'HDR' && $arr[0] != 'TRL' ) && $arr[0] != '' )) {
                        ++$counter;
                        continue;
                    }
                }
            }
            fclose( $fp );
            return $counter;
        }

    As requested, here are both functions. The second function is passed an absolute path to the file; that is what I used to confirm that the file is there and that the path is correct.
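
    A small, generic debugging sketch (plain PHP, nothing specific to this script) that usually narrows this kind of thing down - let fopen()'s warning actually print, and ask PHP what it thinks about the path:

        <?php
        error_reporting(E_ALL);
        ini_set('display_errors', '1');   // let the fopen() warning show up on the CLI

        $file = '/home/site/tmp/return_files_list.txt';

        var_dump(file_exists($file));     // does PHP see the file at all?
        var_dump(is_readable($file));     // readable by the current user?
        var_dump(realpath($file));        // false if the path doesn't resolve

        $fp = fopen($file, 'r');
        if ($fp === false) {
            $err = error_get_last();      // PHP 5.2+: the warning text explains the failure
            echo "fopen failed: " . ($err ? $err['message'] : 'unknown') . "\n";
        } else {
            echo "opened fine\n";
            fclose($fp);
        }

    The warning text (or a false from realpath()) typically points at the real culprit - a wrong path, an open_basedir restriction, or similar.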

    Read the article

  • Java website protection solutions (especially XSS)

    - by Mark
    I'm developing a web application and facing some security problems. In my app users can send messages and see others' (a bulletin-board-like app). I'm validating all the form fields that users can send to my app. Some fields are very easy, like "nick name", which can be 6-10 alphabetical characters, or the message sending time, which is sent to the users as a string and then (when users ask for messages that are "younger" or "older" than a date) parsed with SimpleDateFormat (I'm developing in Java, but my question is not related only to Java). The big problem is the message field. I can't restrict it to only alphabetical characters (upper or lower case), because I have to deal with frequently used characters like ", ', /, {, } etc. (users would not be satisfied if the system didn't allow them to use this stuff). According to http://ha.ckers.org/xss.html, there are a lot of ways people can "hack" my site. But I'm wondering: what can I do to prevent that? Not everything, because there is no 100% protection, but I'd like a solution that can protect my site. I'm using servlets on the server side and jQuery on the client side. My app is "full" AJAX, so users open one JSP, then all the data is downloaded and rendered by jQuery using JSON. (Yeah, I know it's not friendly to users without JavaScript, but it's 2010, right? :-) ) I know front-end validation is not enough. I'd like to use three layers of validation:

    1. Front end: JavaScript validates the data, then sends it to the server.
    2. Server side: the same validation; if anything arrives that shouldn't be there (given the client-side JavaScript), I BAN the user.
    3. If there is anything I wasn't able to catch earlier, the rendering process handles and renders it appropriately.

    Is there any "out of the box" solution, especially for Java? Or another solution that I can use?
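
    Whatever the three layers end up looking like, the usual last line of defence against XSS is output encoding rather than input filtering: store the message exactly as typed and encode it for the context it is rendered into (here, HTML body text). A minimal sketch in plain Java - in practice a maintained library such as the OWASP encoders covers more contexts (attributes, JavaScript, URLs):

        public final class HtmlEscaper {

            /** Escape the characters that let user text break out of HTML content. */
            public static String escapeHtml(String input) {
                if (input == null) {
                    return "";
                }
                StringBuilder out = new StringBuilder(input.length());
                for (int i = 0; i < input.length(); i++) {
                    char c = input.charAt(i);
                    switch (c) {
                        case '&':  out.append("&amp;");  break;
                        case '<':  out.append("&lt;");   break;
                        case '>':  out.append("&gt;");   break;
                        case '"':  out.append("&quot;"); break;
                        case '\'': out.append("&#39;");  break;
                        case '/':  out.append("&#47;");  break;
                        default:   out.append(c);
                    }
                }
                return out.toString();
            }

            private HtmlEscaper() { }
        }

    Messages can then keep their quotes, slashes and braces in storage and be escaped just before they go into the JSON/HTML that the jQuery side injects into the page.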

    Read the article

  • Kernel Compiling from Vanilla to several machines

    - by Linux Pwns Mac
    When compiling kernels for machines, is there a safe or correct way to create a template for, say, servers? I work with a lot of RHEL servers and want to compile them with GRSEC. However, I do not want to rebuild from each machine's own .config every time and go in and remove a bunch of unrelated modules like wireless, Bluetooth, etc., which you typically do not need in servers. I want to create a template .config that can be used on any machine, but is there a safe way to do that when the hardware changes? I know with Linux, at least from my experience, you can jump across hardware much more easily than with Windows/OSX. I assume that as long as I leave most of the main hardware/CPU modules in, this could create a .config that would work for all, or just about any, machines?
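
    One hedged way to reuse a single server template across boxes is to let the kernel build system reconcile the template with each machine instead of hand-editing it per host (paths below are placeholders):

        # copy the shared server template into the source tree on the target machine
        cp /path/to/server-template.config /usr/src/linux/.config

        # prompts only for options the template does not already answer
        make oldconfig

        # on newer kernels, optionally trim to the modules the running machine actually loads
        # make localmodconfig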

    Read the article

  • How to use Node.js to build pages that are a mix between static and dynamic content?

    - by edt
    All pages on my 5-page site should be served by a Node.js server. Most of the page content is static; at the bottom of each page there is a bit of dynamic content. My Node.js code currently looks like:

        var http = require('http');

        http.createServer(function (request, response) {
            console.log('request starting...');
            response.writeHead(200, { 'Content-Type': 'text/html' });

            var html = '<!DOCTYPE html><html><head><title>My Title</title></head><body>';
            html += 'Some more static content';
            html += 'Some more static content';
            html += 'Some more static content';
            html += 'Some dynamic content';
            html += '</body></html>';

            response.end(html, 'utf-8');
        }).listen(38316);

    I'm sure there are numerous things wrong with this example - please enlighten me! For example: how can I add static content to the page without building it up in a string with += numerous times? And what is the best-practice way to build a small site in Node.js where all pages are a mix of static and dynamic content?
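
    A minimal sketch of one common pattern (the file name page.html and the {{dynamic}} placeholder are made up; in practice a templating module or a framework such as Express usually takes over from here): keep the static markup in a file, read it once at startup, and splice the dynamic part in per request.

        var http = require('http');
        var fs   = require('fs');

        // Static shell of the page, loaded once; contains a {{dynamic}} placeholder.
        var template = fs.readFileSync(__dirname + '/page.html', 'utf-8');

        http.createServer(function (request, response) {
            // The only per-request work is producing the dynamic fragment.
            var dynamic = 'Generated at ' + new Date().toUTCString();

            response.writeHead(200, { 'Content-Type': 'text/html' });
            response.end(template.replace('{{dynamic}}', dynamic), 'utf-8');
        }).listen(38316);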

    Read the article

  • PHP & WP: Render Certain Markup Based on True False Condition

    - by rob
    So, I'm working on a site where, at the top of certain pages, I'd like to display a static graphic, and on some pages I'd like to display a scrolling banner. So far I set up the condition as follows:

        <?php
            $regBanner = true;
            $regBannerURL = get_bloginfo('stylesheet_directory'); // grabbing WP site URL
        ?>

    and in my markup:

        <div id="banner">
        <?php
            if ($regBanner) {
                echo "<img src='" . $regBannerURL . "/style/images/main_site/home_page/mock_banner.jpg' />";
            } else {
                echo 'Slider!';
            }
        ?>
        </div><!-- end banner -->

    In my else statement, where I'm echoing 'Slider!', I would like to output the markup for my slider:

        <div id="slider">
            <img src="<?php bloginfo('stylesheet_directory') ?>/style/images/main_site/banners/services_banners/1.jpg" alt="" />
            <img src="<?php bloginfo('stylesheet_directory') ?>/style/images/main_site/banners/services_banners/2.jpg" alt="" />
            <img src="<?php bloginfo('stylesheet_directory') ?>/style/images/main_site/banners/services_banners/3.jpg" alt="" />
            .............
        </div>

    My question is: how can I put the div and all those images into my else echo statement? I'm having trouble escaping the quotes, and my slider markup isn't rendering.
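
    One way to sidestep the quote-escaping entirely (a sketch reusing the $regBanner / $regBannerURL variables from above): drop out of PHP for the slider markup instead of echoing it, so the HTML keeps its own quotes.

        <div id="banner">
        <?php if ($regBanner) : ?>
            <img src="<?php echo $regBannerURL; ?>/style/images/main_site/home_page/mock_banner.jpg" />
        <?php else : ?>
            <div id="slider">
                <img src="<?php echo $regBannerURL; ?>/style/images/main_site/banners/services_banners/1.jpg" alt="" />
                <img src="<?php echo $regBannerURL; ?>/style/images/main_site/banners/services_banners/2.jpg" alt="" />
                <img src="<?php echo $regBannerURL; ?>/style/images/main_site/banners/services_banners/3.jpg" alt="" />
            </div>
        <?php endif; ?>
        </div><!-- end banner -->

    A heredoc (echo <<<HTML ... HTML;) is the other common option when the markup has to stay inside an echo.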

    Read the article
