Search Results

Search found 2558 results on 103 pages for 'highly irregular'.


  • Take Current Snapshot of DB and send it to FTP in the same PHP script: Advice Needed

    - by Rachel
    Not sure if I can do it this way. I want to take a current snapshot of the database and send it to an FTP server, with both pieces of functionality implemented in PHP scripts. Here are the steps I am thinking of right now. In my PHP script (basically I am extending PDO in my DAO class and then preparing the query):

        $qry = "SELECT * FROM MyTablename";
        $stmt = $this->prepare($qry);
        $stmt->execute();

    Now I will store $stmt in a CSV file using fputcsv, or I will execute the SQL command from the script itself and then try to store the result in the CSV file. Note that I do not have any CSV file at this point, so basically I will have to create one; let's say it's $file. So then:

        $file = fputcsv($stmt);

    or

        $file = exec("SELECT * FROM MyTablename");

    Will this put all records in the file? If yes, then I will use FTP functionality to transfer the file to the FTP folder. I am not sure if this approach would work, and I also have concerns regarding the need to prepare $qry. Any suggestions or a different approach would be highly appreciated. Thanks!
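
    A minimal sketch of how the two halves could fit together, assuming an existing PDO connection in $pdo; the table name, file name and FTP credentials below are placeholders. fputcsv() takes an open file handle and one row array at a time, so each fetched row is written out individually before the finished file is uploaded:

        // Dump the table to a local CSV file, one row per fetch.
        $stmt = $pdo->query("SELECT * FROM MyTablename");
        $file = fopen('snapshot.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            fputcsv($file, $row);
        }
        fclose($file);

        // Transfer the finished file to the FTP server (placeholder credentials).
        $conn = ftp_connect('ftp.example.com');
        ftp_login($conn, 'username', 'password');
        ftp_put($conn, 'snapshot.csv', 'snapshot.csv', FTP_ASCII);
        ftp_close($conn);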


  • How VerticalOffset changes when Scrollable height changes while having list inside a list

    - by Prakash
    I am making a WP7 app which has a ListBox of UserControls. Each UserControl has an ItemsControl and a Button (for getting more results). On clicking the button, the ItemsControl's item count increases by 5 or 10. Now, on clicking the GetMore button of any of the UserControls except the first or last, the ScrollableHeight (total height of the ListBox) increases, but the VerticalOffset (position of the scrollbar from the top) of the ListBox stays the same. The problem I am facing is that the VerticalOffset is not absolute but relative to the ScrollableHeight, so the content being viewed up to that point changes based on the new value of ScrollableHeight. I want to know the relation between them, so that I can do some math and set the VerticalOffset value. I have added some dependency properties on VerticalOffset and ScrollableHeight through which I can get the events when either of them changes, and I am trying to use them to readjust the VerticalOffset. Any suggestions or corrections are highly appreciated.
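
    One hedged way to think about the math, assuming the offset really does behave as a fraction of ScrollableHeight as described: the absolute scroll position is then VerticalOffset × ScrollableHeight, and keeping the same content in view after a height change means preserving that absolute position plus whatever height was inserted above the viewport. The handler below is only a sketch; oldScrollableHeight and heightAddedAbove are assumed bookkeeping values, not framework properties:

        // Sketch: re-anchor the viewport after ScrollableHeight changes.
        private void OnScrollableHeightChanged(double newScrollableHeight)
        {
            if (oldScrollableHeight > 0)
            {
                // Absolute position under the "offset is relative" model.
                double oldAbsolute = scrollViewer.VerticalOffset * oldScrollableHeight;
                double newOffset = (oldAbsolute + heightAddedAbove) / newScrollableHeight;
                scrollViewer.ScrollToVerticalOffset(newOffset);
            }
            oldScrollableHeight = newScrollableHeight;
        }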


  • Best practices for Magento Deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage, I can't see any requirement for changing the DB, so the 3 databases will be manually maintained. Specifically:

    - How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?)
    - What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)?
    - Do you restrict developers to editing specific files/folders?
    - Do you restrict theme designers to editing specific files/folders?
    - How do you manage database changes?

    Theme Designer Files/Folders

    Designers can be restricted to editing the following folders:

        app/design/frontend/your_interface/your_theme/layout/
        app/design/frontend/your_interface/your_theme/template/
        app/design/frontend/your_interface/your_theme/locale/
        skin/frontend/your_interface/your_theme/

    Extension Developer Files/Folders

    Extension developers can edit the following folders/files:

        /app/code/local
        /app/etc/modules/<Namespace>_<Module>.xml

    Database Environment Management

    As the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:

    - Overriding the base URL in PHP. (Blog article on setting up dev and staging databases)
    - Changing the base URL in the database after copying. (Where is this stored?)
    - Doing a mysqldump or backup, then doing a replace on the URL in the SQL file.
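
    On the parenthetical "Where is this stored?": the base URLs live in the core_config_data table. A hedged sketch of rewriting them after copying a database down to dev (the hostname is a placeholder, and non-default scopes would need the scope columns handled too):

        -- Magento keeps both base URLs in core_config_data.
        UPDATE core_config_data
           SET value = 'http://dev.example.com/'
         WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');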


  • Memory leak when changing Text field of a Scintilla object.

    - by PlaZmaZ
    I have a relatively large program that I'm optimizing for ASCII input files around 10-80 MB in size. The program reads every line of the file into a StringBuilder and then sets the Text field of the ScintillaNET object to the StringBuilder's contents. The StringBuilder is then set to null.

        private void ReloadFile(string sFile)
        {
            txt_log.ResetText();
            try
            {
                StringBuilder sLine = new StringBuilder("");
                using (StreamReader sr = new StreamReader(sFile))
                {
                    while (true)
                    {
                        string temp = sr.ReadLine();
                        if (temp == null)
                            break;
                        sLine.AppendLine(temp);
                    }
                    sr.Close();
                }
                txt_log.Text = sLine.ToString();
                sLine = null;
            }
            catch (Exception ex)
            {
                MessageBox.Show(this, "An error occurred opening this file.\n\n" + ex.Message,
                    "File Open Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
            GC.Collect();
        }

    The program has an option to reload or open a file. This is irrelevant, as any assignment to txt_log.Text seems to not release the memory previously used for the .Text field. Commenting out the txt_log.Text line gives proper memory behavior. The GC.Collect() line seems pointless, and I have tried both with and without it. Is there something I'm missing here? I HIGHLY doubt it's a problem with the ScintillaNET component itself--rather something in this code.
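
    As an aside, a leaner sketch of the same reload that avoids holding two full copies of the text (the StringBuilder plus the final string) in memory at once; this assumes File.ReadAllText's encoding detection is acceptable for these ASCII files, and it does not by itself explain the control holding on to the previous .Text buffer:

        private void ReloadFile(string sFile)
        {
            try
            {
                // Builds the string in one step, with no intermediate StringBuilder.
                txt_log.Text = File.ReadAllText(sFile);
            }
            catch (Exception ex)
            {
                MessageBox.Show(this, "An error occurred opening this file.\n\n" + ex.Message,
                    "File Open Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
        }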


  • Java SSH2 libraries in depth: Trilead/Ganymed/Orion [/other?]

    - by Bernd Haug
    I have been searching for a pure Java SSH library to use for a project. The single most important feature is that it has to work with command-line git, but remote-controlling command-line tools in general is also important. A pretty common choice, e.g. used in the IntelliJ IDEA git integration (which works very well), seems to be Trilead SSH2. Looking at their website, it's not being maintained any more. Trilead seems to have been a fork of Ganymed SSH2, which was an ETH Zurich project that didn't see releases for a while, but had a recent release by its new owner, Christian Plattner. There is another actively maintained fork from that code base, Orion SSH, that saw an even more recent release, but which seems to get mentioned online much less than the other 2 forks. Has anybody here worked with any of (or, if possible, both of) Ganymed and Orion and could kindly describe the development experience with either/both? Accuracy of documentation [existence of documentation?], stability, bugginess... - all of these would be highly interesting to me. Performance is not so important for my current project. If there is another pure-Java SSH implementation that should be used instead, please feel free to mention it, but please don't just mention a name...describe your judgment from actual experience. Sorry if this question may seem a bit "do my homework"-y, but I've really searched for reviews. Everything out there seems to be either a listing of implementations or short "use this! it's great!" snippets.


  • Getting Depth Value on Kinect SDK 1.6

    - by AlexanderPD
    This is my first try with the Kinect and the Kinect SDK, so I'm having a lot of "newbie issues" :) My goal is to point my mouse at the Kinect's standard video output and get the depth value there. I already have both the normal video and depth video outputs working by using the two "Color Basic-WPF" and "Depth Basic-WPF" samples, and handling mouse events or position is not a problem. In fact, I already get a depth value, but this value is always HIGHLY imprecise. It jumps from 500 to 4000 just by moving to the next pixel on a flat surface. So I'm pretty sure I'm reading the depth value in the wrong way. This is how I read it:

        short debugValue = depthPixels[x*y].Depth;
        debug.Text = "X = " + x + ", Y = " + y + ", value = " + debugValue.ToString();

    I know it's pretty out of context; this little piece of code is inside the same SensorDepthFrameReady function as in "Depth Basic-WPF"! x and y are the mouse coordinates, and depthPixels is of type DepthImagePixel[], a temporary array filled by the depthFrame.CopyDepthImagePixelDataTo(this.depthPixels) instruction. The depth frame is filled here:

        DepthImageFrame depthFrame = e.OpenDepthImageFrame()

    The e comes from here:

        private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)

    and this last one is hooked up here:

        this.sensor.DepthFrameReady += this.SensorDepthFrameReady;

    How must I handle the depth value I get? I know the value should be between 800 and 4000, but I get values between about 500 and about 8000. I have already googled a lot (here on SO too) and I still can't understand whether the depth value is 11 or 13 bits. The SDK examples shrink this value to 8 bits, and this is creating even more confusion in my head :(
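
    For what it's worth, one likely culprit in the snippet above is the indexing: a 2-D pixel position maps into the 1-D depth array as y * width + x, not x * y, so x*y lands on essentially unrelated pixels, which would match the jumpy readings described. A hedged sketch of the lookup, assuming x and y are already clamped to the depth frame's bounds; note that in SDK 1.6 the DepthImagePixel.Depth property is already the distance in millimetres, with no bit-shifting needed:

        // Row-major lookup into the 1-D depth buffer.
        int index = y * depthFrame.Width + x;
        short depthInMillimetres = depthPixels[index].Depth;
        debug.Text = "X = " + x + ", Y = " + y + ", value = " + depthInMillimetres;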


  • Passing Binary Data to a Stored Procedure in SQL Server 2008

    - by Joe Majewski
    I'm trying to figure out a way to store files in a database. I know it's recommended to store files on the file system rather than in the database, but the job I'm working on would highly prefer using the database to store these images (files). There are also some constraints. I'm not an admin user, and I have to write stored procedures to execute all the commands. This hasn't been much of a difficulty so far, but I cannot for the life of me establish a way to store a file (image) in the database. When I try to use the BULK command, I get an error saying "You do not have permission to use the bulk load statement." The bulk utility seemed like the easy way to upload files to the database, but without permissions I have to figure out a workaround. I decided to use an HTML form with a file upload input and handle it with PHP. The PHP calls the stored procedure and passes in the contents of the file. The problem is that now it's saying that the max length of a parameter can only be 128 characters. Now I'm completely stuck. I don't have permission to use the bulk command, and it appears that the max length of a parameter that I can pass to the SP is 128 characters. I expected to run into problems because binary characters and ASCII characters don't mix well together, but I'm at a dead end... Thanks
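
    A hedged sketch of the usual shape of this, in case it helps: declare the stored-procedure parameter as VARBINARY(MAX) (a short declaration on the SQL side would explain a 128-character cap) and bind the raw file contents as a LOB from PHP. The procedure, table and connection names here are made up, and depending on which SQL Server PDO driver is in use, a binary-encoding attribute may also be needed on the bound parameter:

        <?php
        // T-SQL side, for reference:
        //   CREATE PROCEDURE dbo.SaveImage @Data VARBINARY(MAX) AS
        //     INSERT INTO dbo.Images (Contents) VALUES (@Data);

        $contents = file_get_contents($_FILES['upload']['tmp_name']);

        $stmt = $pdo->prepare("EXEC dbo.SaveImage ?");  // $pdo: existing PDO connection
        $stmt->bindParam(1, $contents, PDO::PARAM_LOB); // bind as a LOB, not a short string
        $stmt->execute();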


  • Go for Zend Framework or Django for a modular web application?

    - by dr. squid
    I am using both Zend Framework and Django, and they both have their strengths and weaknesses, but they are both good frameworks in their own way. I want to create a highly modular web application, like this example:

        modules:
            Admin
            cms
            articles
            sections
            ...

    I also want all modules to be self-contained, with all config and template files. I have been looking into a way to solve this in Zend over the last few days, but adding one more level to the module setup doesn't feel right. I am sure this could be done, but should I? I have also added Doctrine to my Zend application, which could give me even more problems in my module setup! With Django this is easy to implement (easy as in concept, not in implementation time or whatever) and a great way to create web apps. But one of the downsides of Django is the web hosting part. There are some web hosts offering Django support, but not that many. So then I guess the question is what has the most value: rapid modular development versus hosting options! Well, comments are welcome! Thanks


  • Opaque tenant identification with SQL Server & NHibernate

    - by Anton Gogolev
    Howdy! We're developing a nowadays-fashionable multi-tenanted SaaS app (shared database, shared schema), and there's one thing I don't like about it:

        public class Domain : BusinessObject
        {
            public virtual long TenantID { get; set; }
            public virtual string Name { get; set; }
        }

    The TenantID is driving me nuts, as it has to be accounted for almost everywhere, and it's a hassle from a security standpoint: what happens if a malicious API user changes TenantID to some other value and mixes things up? What I want to do is get rid of this TenantID in our domain objects altogether, and have either NHibernate or SQL Server deal with it. From what I've already read on the Internets, this can be done with CONTEXT_INFO (here's an NHibernate-based implementation), NHibernate filters, SQL views, and combinations thereof. Now, my requirements are as follows:

    - Remove any mention of TenantID from domain objects
    - ...but have SQL Server insert it where appropriate (I guess this is achieved with default constraints)
    - ...and obviously provide support for filtering based on this criterion, so that customers will never see each other's data
    - If possible, avoid SQL Server views
    - Have a solution which plays nicely with NHibernate, SQL Server's MARS, and the highly concurrent nature of SaaS apps

    What are your thoughts on that?
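
    For the read-side filtering, a hedged sketch of how an NHibernate filter might look (the names are illustrative, not a full recipe): given a filter-def named tenantFilter with the condition TenantID = :tenantId attached to each tenant-owned class mapping, a session can be scoped once and every query is constrained without the entities ever exposing TenantID:

        // Scope a session to the current tenant; all queries against
        // filtered classes automatically get "TenantID = :tenantId" appended.
        using (var session = sessionFactory.OpenSession())
        {
            session.EnableFilter("tenantFilter")
                   .SetParameter("tenantId", currentTenantId);

            var domains = session.CreateCriteria<Domain>().List<Domain>(); // tenant-scoped
        }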


  • Need a tool to search large structured text documents for words, phrases and related phrases

    - by pitosalas
    I have to keep up with structured documents containing things such as requests for proposals, government program reports, threat models and all kinds of things like that. They are in what I would call techno-legalese: highly structured, with section numbering and 3, 4 and 5 levels of nesting, all in English. I need a more efficient way to locate those paragraphs of nuggets that matter to me. So what I'd like is a kind of local document index/repository that would allow me to have some standing queries and easily locate sections in documents that talk about my queries. Here's an example: I'd like to load in 10 large PDF files, each of, say, 100 pages. Each PDF contains English text, formatted very nicely into paragraphs and sections. I'd like to specify that I am interested in "blogging platforms", "weaknesses in Ruby", "localization and internationalization", and then ideally look at a list that shows the section of text, the name of the document, and other information that seems to be related to and/or include the words and phrases I specified. I am sure something like this exists. I would call it something like document indexing, document comprehension or structured searching.


  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear to come about when storing deeper documents: in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.


  • How to get database table header information into a CSV file.

    - by Rachel
    I am trying to connect to the database, get the current state of a table and write that information into a CSV file. With the piece of code below, I am able to get the data into the CSV file, but I am not able to get the header information from the database table into it. So my question is: how can I get the database table header information into a CSV file?

        $config['database'] = 'sakila';
        $config['host']     = 'localhost';
        $config['username'] = 'root';
        $config['password'] = '';

        $d = new PDO('mysql:dbname='.$config['database'].';host='.$config['host'],
                     $config['username'], $config['password']);

        $query = "SELECT * FROM actor";
        $stmt = $d->prepare($query);

        // Execute the statement
        $stmt->execute();

        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        $data = fopen('file.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    By header information I mean:

        Vehicle  Build  Model
        car      2009   Toyota
        jeep     2007   Mahindra

    So the header information for this would be: Vehicle, Build, Model. Any guidance would be highly appreciated.
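
    A hedged sketch of one way to do it with the code above: with PDO::FETCH_ASSOC, the column names are the keys of each fetched row, so the header can be written once from the first row's keys, with no extra query (note the var_dump() call above already consumes the first row, so it would need to go):

        $data = fopen('file.csv', 'w');
        $headerWritten = false;
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            if (!$headerWritten) {
                // The associative keys are the column (header) names.
                fputcsv($data, array_keys($row));
                $headerWritten = true;
            }
            fputcsv($data, $row);
        }
        fclose($data);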


  • How to RDC to a particular machine that is a member of a TS farm?

    - by Amit Arora
    I created a Terminal Services farm comprising 3 TS hosts (say, TS1, TS2 and TS3) running Windows 2008 R2 Enterprise, a TS Connection Broker and a TS Gateway, for the purpose of hosting a Windows application as a TS RemoteApp. The setup works just fine. Now I want to make some further configuration changes on one particular TS host, say TS2, and not on any other TS host. I try to RDC to TS2, but I find myself getting connected to a randomly chosen TS host (sometimes TS1, sometimes TS2, and at other times TS3). I think my RDC connection is also going via the Connection Broker, which is forwarding me to whichever TS host it decides is best. Is there a way I can deterministically connect to a particular TS host using RDC? I don't have the option to log in locally on a TS host, as the entire setup is hosted in a remote data center. I think this is a very common scenario and must have a straightforward solution. It could be as easy as RDCing to the Connection Broker server and disabling it for a while, but I don't know how to do that either. Any help will be highly appreciated.
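
    One avenue worth trying (hedged, as broker redirection policies vary): mstsc's /admin switch connects to the administrative session, which the Connection Broker normally does not redirect, so this may land deterministically on the named host:

        mstsc /v:TS2 /admin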


  • What do we log and why do we log it?

    - by Lucas
    This has been bugging me for quite some time. Reading various questions on SO, blogs and listening to colleagues, I keep hearing how important "logging" is; how various logging frameworks stack up against each other, and how there are so many to pick from that it's (apparently) ridiculous. Now, I know what logging is. What I don't know is what is supposed to be logged and why. Sure, I can guess. Exceptions? Sounds like something one might want to log... but which exceptions? And is it only exceptions? And what do I do with the logged information? If it's an in-house app, then it could probably be put to good use, but if it's a commercial desktop application, how is the log of... whatever... helping anyone? I doubt regular users would be peeking inside. Is it then something you ask the users to provide on request? I'm deeply frustrated by my own ignorance of this. It's also surprising how little information there is about it. The info on the websites of the various logging frameworks is all written for an audience that already knows what it wants to log, and knows why it needs to do so. The same goes for the various discussions on SO about logging, like for instance this highly voted-up question on logging best practices. For a question with so many votes, it's almost comical how there's next to nothing in there that would answer my what and why questions. So, being finally fed up, I'm asking here: what do people log, and why do they log it?


  • Integrating Magento with a simple static website.

    - by ExtraLean
    Magento is an awesomely powerful ecommerce platform. That said, it is also very complex, and I'd like to know if there is a relatively simple way to use Magento as our mISV site's backend to fulfill orders, without actually "using" Magento's framework to build the site, run the site, etc. In other words, I don't want to use the built-in CMS, etc., since we have a static website already built. I'd just like our Buy Now buttons to use the checkout functionality, and I'd like to be able to use the back-end part to keep track of orders, etc. I was able to accomplish this fairly easily with osCommerce, but Magento is proving to be a little more difficult to wrap my head around, since I've only been looking at it for a few days now. I found another person asking this same exact question on the Magento wiki (along with several others in the forum), and none of them ever received a reply for some reason. I noticed that there are many Magento experts on Stack Overflow, so I thought I'd give it a go here. This is an example of one question asked by someone on their wiki, and it captures the essence of what I'm trying to accomplish:

        Hi, as far as I understand, all shopping cart/eCommerce solutions I see are full-featured PHP-driven web sites. This means that all the pages the user interacts with are server-generated, and thus the experience is tied to the Magento framework/workflow. I'd like to integrate bits and pieces of eCommerce/shopping cart into my existing website. Effectively, I'd like to have:

        1) on a product information page, a "buy now/add to cart" button that adds to a cart
        2) on every page, a view cart/checkout option
        3) on a checkout page, with additional content already in place, the Magento "checkout" block integrated into the page (and not the entire page generated by Magento).

        Have any of you done this with Magento?

    This is for a simple one-product website, so any advice you could share would be highly appreciated.
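
    A hedged starting point for this kind of integration: Magento (the 1.x line) can be bootstrapped from an external PHP page, after which its models and session are available to a static site. The path below is illustrative, and wiring up a full external cart/checkout involves more (cookie domains, session names) than this sketch shows:

        <?php
        // Bootstrap Magento from outside its own front controller.
        require_once 'magento/app/Mage.php';
        Mage::app();

        // Example: read the cart item count for a "view cart" link.
        $quote = Mage::getSingleton('checkout/session')->getQuote();
        echo $quote->getItemsCount();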


  • Spree customize/extend user roles and permissions

    - by swapnil
    I am trying to specify some custom roles in Spree, for example a role 'client', and extend the permissions so this role can access the admin section. This user should be able to access only those Products created by that user. The concept is letting a user with role 'client' manage only products and certain other models. To start with, I added the CanCan plugin and defined a RoleAbility class in role_ability.rb, following this post: Spree Custom Roles Permissions.

        class RoleAbility
          include CanCan::Ability

          def initialize(user)
            user ||= User.new
            if user.has_role? 'admin'
              can :manage, :all
            elsif user.has_role? 'client_admin'
              can :read, Product
              can :admin, Product
            end
          end
        end

    Added this to an initializer, config/initializers/spree.rb:

        Ability.register_ability(RetailerAbility)

    Also extended the products controller in app/controllers/admin_products_controller_decorator.rb:

        Admin::ProductsController.class_eval do
          def authorize_admin
            authorize! :admin, Product
            authorize! params[:action].to_sym, Product
          end
        end

    But I am getting the flash message 'Authorization Failure'. Trying to find some luck, I referred to the following links:

    - A github gist for customizing Spree roles: https://gist.github.com/1277326
    - A thread describing a similar issue: http://groups.google.com/group/spree-user/browse_thread/thread/1e819e10410d03c5/23b269e09c7ed47e

    All efforts in vain... Any pointers on what is going on here are highly appreciated. Thanks in advance.
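
    One thing that stands out in the snippets above (hedged, since it may just be a transcription slip in the question): the initializer registers RetailerAbility, while the class defined is RoleAbility, so the custom ability would never be consulted and every check would fail. If the class names are as shown, the initializer would need to be:

        # config/initializers/spree.rb
        Ability.register_ability(RoleAbility)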


  • FogBugz On Demand + online source control at low/no cost?

    - by quux
    I have a project in the free hosted FogBugz On Demand (FOD) product right now. This is great for feature/issue tracking, but I've been working from a codebase that is solely on my development machine. I'd like to collaborate with another guy who is thousands of miles from me, so we need a source control (SCM) solution! I use Visual Studio (2005, but I can upgrade to later versions as needed). I am aware that FogBugz can integrate with a number of source control systems. So now the question is: which online SCM products can integrate well with FOD and VS? And which ones do so well at low or no cost, for a small code repository? And where might I find a proven recipe for putting this together? I'm open to other solutions which provide the same functionality. Please don't suggest Trac - I regard it highly, but I want the features of FOD (especially the evidence-based scheduling) in my issue tracking solution. So really, I need to combine FOD + VS + some online SCM product into a low- or no-cost solution for two coders to collaborate on.


  • Using Python to get a CSV output for the following example.

    - by Az
    Hi there, I'm back again with my ongoing saga of Student-Project Allocation questions. Thanks to Moron (who does not match his namesake) I've got a bit of direction for the evaluation portion of my project. Going with the idea of the Assignment Problem and the Hungarian Algorithm, I would like to express my data in the form of a .csv file, which would end up looking like this in spreadsheet form (based on the structure I saw here):

        |          | Project 1 | Project 2 | Project 3 |
        |----------|-----------|-----------|-----------|
        | Student1 |           |     2     |     1     |
        |----------|-----------|-----------|-----------|
        | Student2 |     1     |     2     |     3     |
        |----------|-----------|-----------|-----------|
        | Student3 |     1     |     3     |     2     |

    To make it less cryptic: the rows are the Students/Agents and the columns represent Projects/Tasks. Obviously ONE project can be assigned to ONE student. That, in short, is what my project is about. The fields hold the preference weights the students have placed on the projects (ranging from 1 to 10). If a field is blank, that student does not want that project and there's no chance of him/her being assigned it. Anyway, my data is stored in dictionaries, specifically the students and projects dictionaries, such that:

        students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences)

    where preferences is itself a dictionary such that preferences[rank] = {project_id}, and:

        projects[project_id] = Project(project_id, project_name)

    I'm aware that sorted(students.keys()) will give me a sorted list of all the student IDs, which will populate the row labels, and sorted(projects.keys()) will give me the list I need to populate the column labels. Thus, for each student, I'd go into their preferences dictionary and match the applicable projects to ranks. I can do that much. Where I'm failing is understanding how to create the .csv file. Any help, pointers or good tutorials will be highly appreciated.
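
    A hedged sketch of the CSV-writing step with Python's csv module, assuming the dictionaries described above, that Student exposes student_name and preferences as attributes, and that each preferences[rank] holds a single project id (all assumptions about code not shown here):

        import csv

        def write_matrix(students, projects, path='matrix.csv'):
            project_ids = sorted(projects.keys())
            with open(path, 'w', newline='') as f:
                writer = csv.writer(f)
                # Header row: blank corner cell, then one column per project.
                writer.writerow([''] + [projects[pid].project_name for pid in project_ids])
                for sid in sorted(students.keys()):
                    student = students[sid]
                    # Invert rank -> project_id into project_id -> rank.
                    rank_of = {pid: rank for rank, pid in student.preferences.items()}
                    # Blank cell where the student did not rank the project.
                    writer.writerow([student.student_name] +
                                    [rank_of.get(pid, '') for pid in project_ids])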


  • Indexing community images for Google image search

    - by Vittorio Vittori
    Hi, I'm trying to understand what I can do to make my site reachable by Google's image search spiders. I like last.fm's solution, and I thought of using a technique like the one their staff use to let Google find artist images on their pages. When I look for an artist and search on Google image search, as often as not I find an image from last.fm's artist pages. Here's an example: if I search for the band Pure Reason Revolution, it brings me to the artist's image page, http://www.last.fm/music/Pure+Reason+Revolution/+images/4284073. Now if I take a look at the image file, I can see it's named:

        http://userserve-ak.last.fm/serve/500/4284073/Pure+Reason+Revolution+4.jpg

    So, trying to understand how the service works, I'd break that down as:

        http://userserve-ak.last.fm/serve/   the server that serves the images
        500/                                 the selected size for the image
        4284073/                             the image id in the database
        Pure+Reason+Revolution+4.jpg         the image name

    I thought it unlikely that the real filename for the image is Pure+Reason+Revolution+4.jpg, because of overwrite problems when users upload; in fact, if I type:

        http://userserve-ak.last.fm/serve/500/4284073.jpg

    I probably find the real image location and filename. With this technique the image is highly reachable by search engines and easily archived. My question is: does any guide or tutorial exist for approaching this kind of technique, or something similar?


  • Bluetooth in Java Mobile: Handling connections that go out of range

    - by Albus Dumbledore
    I am trying to implement a server-client connection over SPP. After initializing the server, I start a thread that first listens for clients and then receives data from them. It looks like this:

        public final void run() {
            while (alive) {
                try {
                    /*
                     * Await client connection
                     */
                    System.out.println("Awaiting client connection...");
                    client = server.acceptAndOpen();

                    /*
                     * Start receiving data
                     */
                    int read;
                    byte[] buffer = new byte[128];
                    DataInputStream receive = client.openDataInputStream();
                    try {
                        while ((read = receive.read(buffer)) > 0) {
                            System.out.println("[Received]: " + new String(buffer, 0, read));
                            if (!alive) {
                                return;
                            }
                        }
                    } finally {
                        System.out.println("Closing connection...");
                        receive.close();
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    It's working fine, as I am able to receive messages. What's troubling me is how the thread would eventually die when a device goes out of range. Firstly, the call to receive.read(buffer) blocks, so the thread waits until it receives any data. If the device goes out of range, it would never proceed to check whether it has been interrupted in the meantime. Secondly, it would never close the connection, i.e. the server would not accept the device once it comes back into range. Thanks! Any ideas would be highly appreciated! Merry Christmas!
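
    A hedged sketch of one common workaround: since the blocking read() won't notice a vanished peer on its own, a watchdog timer can force-close the connection if no data arrives within some window, which makes the blocked read() fail with an IOException and hands control back to the accept loop. Field names and the timeout are illustrative; resetWatchdog() would be called once after acceptAndOpen() and again after every successful read:

        private static final long TIMEOUT_MS = 30000; // illustrative timeout
        private Timer watchdog = new Timer();         // java.util.Timer (CLDC 1.1)
        private TimerTask task;

        private void resetWatchdog() {
            if (task != null) {
                task.cancel();
            }
            task = new TimerTask() {
                public void run() {
                    try {
                        // Closing the connection unblocks receive.read(buffer)
                        // with an IOException in the reader thread.
                        client.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            };
            watchdog.schedule(task, TIMEOUT_MS);
        }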


  • How long do you keep session cookies around for?

    - by user246114
    Hi, I'm implementing a web app which uses sessions. I'm using GWT and App Engine as my client/server, but I don't think they're doing anything really different than I would do with PHP and Apache, etc. When a user logs into my web app, I am using HttpSession to start a session for them. I get the session ID like this:

        // From my login servlet:
        getThreadLocalRequest().getSession(false).getId();

    I return the session ID to the client, and they store it in a cookie. The tutorial I'm using sets this cookie to expire in two weeks:

        Cookie.write("sid", theSessionId, 1000 * 60 * 60 * 24 * 14); // two weeks

    Here's where I'm confused: if the cookie expires in two weeks, then my user will go along using the web app happily, only to one day browse to my site and be shown a login screen. What are my options? Can I just set no expiration time for this cookie? That way the user would have to explicitly log out; otherwise they could just use the app forever without having to log back in. Or is there a better way to do this? I can't remember sites like Twitter ever asking me to log back in; I seem to be permanently logged in. Do they just set no expiration? The web app isn't protecting any sort of highly sensitive data, so I don't mind leaving a cookie that doesn't expire, but it seems like there must be a better way. This is the tutorial I'm referencing: http://code.google.com/p/google-web-toolkit-incubator/wiki/LoginSecurityFAQ Thanks


  • I made a horrible loop.... help fix my logic please

    - by Webnet
    I know I'm doing this a bad way... but I'm having trouble seeing any alternatives. I have an array of products that I need to select 4 of randomly. $rawUpsellList is an array of all of the possible upsells based on the items in the cart; each value is a product object. I know this is horribly ugly code, but I don't see an alternative right now... someone please put me out of my misery so this code doesn't make it to production...

        $rawUpsellList = array();
        foreach ($tru->global->cart->getItemList() as $item) {
            $product = $item->getProduct();
            $rawUpsellList = array_merge($rawUpsellList, $product->getUpsellList());
        }

        $upsellCount = count($rawUpsellList);
        $showItems = 4;
        if ($upsellCount < $showItems) {
            $showItems = $upsellCount;
        }

        $maxLoop = 20;
        $upsellList = array();
        for ($x = 0; $x <= $showItems; $x++) {
            $key = rand(0, $upsellCount);
            if (!array_key_exists($key, $upsellList) && is_object($rawUpsellList[$key])) {
                $upsellList[$key] = $rawUpsellList[$key];
                $x++;
            }
            if ($x == $maxLoop) {
                break;
            }
        }

    Posting this code was highly embarrassing...
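
    A hedged sketch of a simpler replacement for the selection loop, assuming $rawUpsellList is built exactly as above: let array_rand() pick the distinct keys instead of looping over rand() and re-rolling collisions:

        // Pick up to 4 distinct upsells at random.
        $showItems = min(4, count($rawUpsellList));
        $upsellList = array();

        if ($showItems > 0) {
            // array_rand() returns a single key when asked for 1, an array otherwise.
            $keys = (array) array_rand($rawUpsellList, $showItems);
            foreach ($keys as $key) {
                $upsellList[$key] = $rawUpsellList[$key];
            }
        }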


  • VS2010 renders controls' JS awkwardly

    - by Juergen Hoffmann
    I have created a Website Project in VS2010, and my controls are not rendered correctly: the JS that is produced is not correctly formatted. Here is an example:

        protected void Page_PreRender(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                objListBox.Attributes.Add("onchange", "Control_doPostBack('" + objListBox.ClientID + "','ListBox_OnClick'); return false;");
                objListBox.Attributes.Add("onblur", "Control_doPostBack('" + trListbox.ClientID + "','ListBox_OnBlur'); return false;");
                img.Attributes.Add("onclick", "Control_doPostBack('" + trListbox.ClientID + "','IMG_OnClick'); return false;");
            }
        }

    and the corresponding control is rendered as:

        <select size="4" name="ctl00$PlaceHolder_Content$drop$objListBox" onchange="Control_doPostBack(&#39;PlaceHolder_Content_drop_objListBox&#39;,&#39;ListBox_OnClick&#39;); return false;setTimeout(&#39;__doPostBack(\&#39;ctl00$PlaceHolder_Content$drop$objListBox\&#39;,\&#39;\&#39;)&#39;, 0)" id="PlaceHolder_Content_drop_objListBox" onblur="Control_doPostBack(&#39;PlaceHolder_Content_drop_trListbox&#39;,&#39;ListBox_OnBlur&#39;); return false;" style="position:absolute;">
        </select>

    As you can see, the ' characters are rendered as &#39;, which screws up the browser. Is there a tweak for msbuild or inside the project properties? Any help is highly appreciated.


  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4 MB per user). At this rate, I was worried about blowing past a 1 GB MySQL limit in no time, so to deal with it I created distributed SQLite databases per user to store large data objects. It's worked wonderfully so far: SQLite is super fast and simple. I know that reading from and writing to files is different in a grid hosting environment, and I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched by maybe two clients at the same time (one updating the data hourly at most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4 GB) on a MySQL database seems to be a real trouble point. Will grid hosting cause me serious problems? Thanks.


  • How do I add a new object with suds?

    - by Jerome
    I'm trying to use suds but have so far been unsuccessful at figuring this out. Hopefully it's something simple that I'm missing. Any help would be highly appreciated. This is the raw SOAP message that I need to produce:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:api="http://api.service.apimember.soapservice.com/">
          <soapenv:Header/>
          <soapenv:Body>
            <api:insertOrUpdateMemberByObj>
              <token>t67GFCygjhkjyUy8y9hkjhlkjhuii</token>
              <member>
                <dynContent>
                  <entry>
                    <key>FIRSTNAME</key>
                    <value>hhhhbbbbb</value>
                  </entry>
                </dynContent>
                <email>[email protected]</email>
              </member>
            </api:insertOrUpdateMemberByObj>
          </soapenv:Body>
        </soapenv:Envelope>

    So I use suds to create the member object:

        member = client.factory.create('member')

    which produces:

        (apiMember){
            attributes = (attributes){
                entry[] = <empty>
            }
        }

    How exactly do I append an 'entry'? I tried this:

        member.attributes.entry.append({'key': 'FIRSTNAME', 'value': 'test'})

    and that produces:

        (apiMember){
            attributes = (attributes){
                entry[] = {
                    value = "test"
                    key = "FIRSTNAME"
                },
            }
        }

    However, what I actually need is:

        (apiMember){
            attributes = (attributes){
                entry[] = (entry){
                    value = "test"
                    key = "FIRSTNAME"
                },
            }
        }

    How do I achieve this?
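
    A hedged guess at the missing step, based on how suds generally handles complex types: appending a plain dict yields an untyped member (no "(entry)" marker), whereas an object built by the factory carries the WSDL type. Assuming the WSDL defines an entry type under that name:

        # Build a typed entry via the factory instead of appending a raw dict.
        entry = client.factory.create('entry')
        entry.key = 'FIRSTNAME'
        entry.value = 'test'

        member = client.factory.create('member')
        member.attributes.entry.append(entry)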

