Search Results

Search found 13862 results on 555 pages for 'questions'.


  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selecting and linking multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and their associated repositories across multiple posts. When a repository is first accessed it uses the existing data context if one exists, else it creates a new context.

    The problem is that if the lifetime of the context is only per-request (as suggested in the article), then you have to deal with multiple contexts and the complexity of detaching and attaching entities between contexts. My solution is to share the context between posts by creating a single View Model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable and retrieving it from Session on subsequent page requests, thereby maintaining the same context across all posts until the profile is saved.

    This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model's Session variable or, more importantly, how large the Session variable is. So two questions, I suppose: firstly, should I look for a better solution to the shared-context-across-posts issue (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!

    Read the article

  • How do I process a nested list?

    - by ddbeck
    Suppose I have a bulleted list like this:

        * list item 1
        * list item 2 (a parent)
        ** list item 3 (a child of list item 2)
        ** list item 4 (a child of list item 2 as well)
        *** list item 5 (a child of list item 4 and a grand-child of list item 2)
        * list item 6

    I'd like to parse that into a nested list or some other data structure which makes the parent-child relationship between elements explicit (rather than depending on their contents and relative position). For example, here's a list of tuples containing an item and a list of its children (and so forth):

        [('list item 1',),
         ('list item 2', [('list item 3',),
                          ('list item 4', [('list item 5',)])]),
         ('list item 6',)]

    I've attempted to do this with plain Python and some experimentation with Pyparsing, but I'm not making progress. I'm left with two major questions:

    What's the strategy I need to employ to make this work? I know recursion is part of the solution, but I'm having a hard time making the connection between this and, say, a Fibonacci sequence.

    I'm certain I'm not the first person to have done this, but I don't know the terminology of the problem to make fruitful searches for more information. What problems are related to this, so that I can learn more about solving these kinds of problems in general?
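
    The closest I've got to a strategy is a stack-based sketch like the one below (untested against edge cases, and it produces (text, children) pairs rather than exactly the tuples above), but I'd still like to understand the general approach and its terminology:

        def parse_outline(lines):
            """Turn '*'-prefixed lines into nested (text, children) pairs.

            Depth is the number of leading stars; each item is appended to the
            children list of the most recent shallower item.
            """
            root = []                      # top-level items
            stack = [root]                 # stack[d] = children list at depth d
            for line in lines:
                stars, _, text = line.partition(' ')
                depth = len(stars)         # '*' -> 1, '**' -> 2, ...
                item = (text, [])          # (text, list of children)
                del stack[depth:]          # drop deeper levels that just ended
                stack[-1].append(item)
                stack.append(item[1])      # this item's children are the new top
            return root

        example = [
            "* list item 1",
            "* list item 2 (a parent)",
            "** list item 3 (a child of list item 2)",
            "** list item 4 (a child of list item 2 as well)",
            "*** list item 5 (a child of list item 4 and a grand-child of list item 2)",
            "* list item 6",
        ]
        print(parse_outline(example))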

    Read the article

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time.

    I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in a week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up.

    Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots.

    Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial.

    Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead.

    I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.
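
    For what it's worth, here is the slot-numbering arithmetic I'm assuming (a quick sanity-check sketch, with Monday as day 0 and slots numbered from 0): the index is day * 288 + hour * 12 + minute / 5, which is how I get slot 1,299 for Friday 12:15pm.

        #include <stdio.h>

        /* Slot index on a 5-minute grid: 288 slots per day, 2,016 per week.
           Assumes day 0 = Monday and slot numbering starts at 0. */
        static int slot_index(int day, int hour, int minute)
        {
            return day * 288 + hour * 12 + minute / 5;
        }

        int main(void)
        {
            /* Friday (day 4), 12:15pm -> 4*288 + 12*12 + 3 = 1299 */
            printf("%d\n", slot_index(4, 12, 15));
            return 0;
        }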

    Read the article

  • Internet Radio Station for University

    - by ryan
    I am trying to help my University Student Radio station rethink the setup of the way they stream music, but I have some questions regarding the use of Ubuntu to stream music. Currently, the radio station uses two Windows machines: one of which is used to stream the radio station and serve the website, and the other is used by rotating DJs to select songs and create playlists. The computer used by DJs feeds mono into the sound card of the server, and the server streams the feed online.

    - Ideally I would like to maintain a two-computer setup: one computer as server, and another that is used to select and play music by rotating DJs.
    - I would like to use Ubuntu for the server.
    - I would like to use Windows for the other machine.
    - The server should be able to stream song information.

    First, is there a way to somehow get the song information from an analog feed? Second, what is the best streaming server for radio? I have encountered shoutcast, icecast, and darwin, but I don't know where to begin in attempting to gauge them. Finally, if anyone has any tips or pointers about small internet radio station management/setup, they would be appreciated, as this is my first radio station and I am eager to hear of past experiences.

    Read the article

  • Is jQuery always the answer?

    - by Kibbee
    I've come across a couple of questions, such as this one, and I really have to wonder why "Use jQuery" seems to be the answer whenever somebody asks how to do something in JavaScript. I understand that jQuery can save you a lot of time and can help you out a lot, especially when you are doing a lot of fancy JavaScript on your site. However, in instances like this, and in many other instances, it seems like it's just sidestepping the problem instead of answering the question. I also feel like this builds too much dependency on libraries. I've seen way too many developers who simply rely too much on libraries and, if they encounter a situation where they don't have the library, are completely unable to function. I feel like there are already enough developers who don't know JavaScript, without telling everybody not to learn JavaScript and just use jQuery.

    So, just to reiterate the question: do you think there's too much of a tendency to use jQuery for small pieces of JavaScript, when most of the functionality of jQuery isn't being used? Should developers be fluent in bare JavaScript so they don't get too dependent on libraries?

    [Additional related conversation topic] Does the existence of jQuery give too much slack to the browser developers who write the JavaScript engines? If we just have workarounds to cover all the inconsistencies in JavaScript, what pressure is there on browser makers to ensure that their JavaScript implementation works as it should? I feel like this extrapolates the same problem discussed in SO Podcast #36 of "be conservative in what you send, liberal in what you accept". By being so liberal with bad JavaScript engines, and using a common library to work around the flaws, we are promoting their use and extending the problem.

    Read the article

  • How to avoid loading a LINQ to SQL object twice when editing it on a website

    - by emzero
    Hi guys, I know you are all tired of these LINQ to SQL questions, but I'm barely starting to use it (never used an ORM before) and I've already found some "ugly" things. I'm pretty used to old-school ASP.NET WebForms development, but I want to leave that behind and learn the new stuff (I've just started to read an ASP.NET MVC book and a .NET 3.5/4.0 one). So here is one thing I didn't like and couldn't find a good alternative to.

    In most examples of editing a LINQ object I've seen, the object is loaded (hitting the db) at first to fill the current values on the form page. Then the user modifies some fields, and when the "Save" button is clicked, the object is loaded a second time and then updated. Here's a simplified example from ScottGu's NerdDinner site:

        //
        // GET: /Dinners/Edit/5

        [Authorize]
        public ActionResult Edit(int id)
        {
            Dinner dinner = dinnerRepository.GetDinner(id);
            return View(new DinnerFormViewModel(dinner));
        }

        //
        // POST: /Dinners/Edit/5

        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(int id, FormCollection collection)
        {
            Dinner dinner = dinnerRepository.GetDinner(id);
            UpdateModel(dinner);
            dinnerRepository.Save();
            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }

    As you can see, the dinner object is loaded twice for every modification. Unless I'm missing something about LINQ to SQL caching the last queried objects, I don't like getting it twice when it should be retrieved only once, modified and then committed back to the database. So again, am I really missing something? Or is it really hitting the database twice (in the example above it won't harm, but there could be cases where getting an object or set of objects is heavy)? If so, what alternative do you think is best to avoid double-loading the object? Thank you so much, greetings!
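
    The alternative I imagine looks roughly like this (an untested sketch, and it assumes the Dinner mapping allows attaching as modified, e.g. via a timestamp column; dataContext stands in for however the repository exposes its DataContext): build the entity from the posted values and attach it, skipping the second SELECT. But I don't know if this is the recommended pattern.

        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(int id, FormCollection collection)
        {
            // Construct the entity from the posted form instead of re-loading it.
            var dinner = new Dinner { DinnerID = id };
            UpdateModel(dinner);

            // LINQ to SQL: attach the detached object as modified, then submit.
            dataContext.Dinners.Attach(dinner, true);   // true = treat as modified
            dataContext.SubmitChanges();

            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }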

    Read the article

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus]

    I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single proc to perform the whole process, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR (MAX)
        AS
        SET NoCount ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table
        (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards
            where companyId = @companyId and Active=1
            order by addedBy desc
        --2
        OPEN creditCards
        --3
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            --OPEN creditCards
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData
                where cardid = @cardId
                order by valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            insert into @tmpTable (CardId, DecryptedCard) values ( @cardId, @decryptedCardData )
            set @decryptedCardData = ''

            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        select CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards
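
    Would a set-based shape like this (a rough, untested sketch using the FOR XML PATH trick to concatenate the ordered fragments per card) at least give the optimizer a chance at a parallel plan, compared to the nested cursors which run row by row?

        SELECT c.cardId AS CardId,
               (SELECT CONVERT(nvarchar(max),
                               DecryptByCert(Cert_Id('Oh-Nay-Nay'), d.EncryptedCard, @DecryptionKey))
                  FROM CreditCardData d
                 WHERE d.cardid = c.cardId
                 ORDER BY d.valueOrder
                   FOR XML PATH(''), TYPE
               ).value('.', 'nvarchar(max)') AS DecryptedCard
          FROM CreditCards c
         WHERE c.companyId = @companyId
           AND c.Active = 1
         ORDER BY c.addedBy DESC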

    Read the article

  • Multithreading and Interrupts

    - by Nicholas Flynt
    I'm doing some work on the input buffers for my kernel, and I had some questions. On dual-core machines, I know that more than one "process" can be running simultaneously. What I don't know is how the OS and the individual programs work to protect against collisions in data. There are two things I'd like to know on this topic:

    (1) Where do interrupts occur? Are they guaranteed to occur on one core and not the other, and could this be used to make sure that real-time operations on one core were not interrupted by, say, file IO which could be handled on the other core? (I'd logically assume that the interrupts would happen on the 1st core, but is that always true, and how would you tell? Or perhaps does each core have its own settings for interrupts? Wouldn't that lead to a scenario where each core could react simultaneously to the same interrupt, possibly in different ways?)

    (2) How does the dual-core processor handle opcode memory collisions? If one core is reading an address in memory at exactly the same time that another core is writing to that same address, what happens? Is an exception thrown, or is a value read? (I'd assume the write would work either way.) If a value is read, is it guaranteed to be either the old or the new value at the time of the collision?

    I understand that programs should ideally be written to avoid these kinds of complications, but the OS certainly can't expect that, and will need to be able to handle such events without choking on itself.

    Read the article

  • strict aliasing and alignment

    - by cooky451
    I need a safe way to alias between arbitrary POD types, conforming to ISO-C++11, explicitly considering 3.10/10 and 3.11 of n3242 or later. There are a lot of questions about strict aliasing here, most of them regarding C and not C++. I found a "solution" for C which uses unions, probably using this section:

        "union type that includes one of the aforementioned types among its elements or nonstatic data members"

    From that I built this:

        #include <iostream>

        template <typename T, typename U>
        T& access_as(U* p)
        {
            union dummy_union
            {
                U dummy;
                T destination;
            };

            dummy_union* u = (dummy_union*)p;
            return u->destination;
        }

        struct test
        {
            short s;
            int i;
        };

        int main()
        {
            int buf[2];
            static_assert(sizeof(buf) >= sizeof(double), "");
            static_assert(sizeof(buf) >= sizeof(test), "");

            access_as<double>(buf) = 42.1337;
            std::cout << access_as<double>(buf) << '\n';

            access_as<test>(buf).s = 42;
            access_as<test>(buf).i = 1234;
            std::cout << access_as<test>(buf).s << '\n';
            std::cout << access_as<test>(buf).i << '\n';
        }

    My question is, just to be sure: is this program legal according to the standard?* It doesn't give any warnings whatsoever and works fine when compiling with MinGW/GCC 4.6.2 using:

        g++ -std=c++0x -Wall -Wextra -O3 -fstrict-aliasing -o alias.exe alias.cpp

    * Edit: And if not, how could one modify this to be legal?
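
    (For reference, the usual standards-safe fallback I could use instead is a memcpy-based helper along these lines - a sketch only, and alignment and object size would still be the caller's problem, as with access_as above - but I'd like to know about the union trick specifically.)

        #include <cstring>

        // Read a T out of whatever object p points at, without violating aliasing rules.
        template <typename T, typename U>
        T read_as(const U* p)
        {
            T result;
            std::memcpy(&result, p, sizeof(T));
            return result;
        }

        // Write a T into the storage p points at.
        template <typename T, typename U>
        void write_as(U* p, const T& value)
        {
            std::memcpy(p, &value, sizeof(T));
        }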

    Read the article

  • How to make a good programming interview?

    - by luckyluke
    I do interviews from time to time to recruit decent people, and I really think I AM NOT doing the correct job. I work in a company where we have to do a lot of DB programming, .NET programming and Java programming, so we need people who are open-minded and not focused on a particular tech. After all, a language is a notation; you have to understand what is going on under the hood.

    I ask people about their projects, ask them some coding questions (believe me, a SQL question involving a CROSS JOIN is hard), let them write some code, ask them about OO design, ask them how they update their knowledge and stay up to date, and whether they have FUN when they code (at least sometimes). Hell, I even give them a coding exercise to do at home (3 hours max) to see how they think and code. And yet my hit rate at hiring junior members (those who survive the initial 3 months) is just about 33%.

    So my question: how do YOU run good interviews, because I think my hit rate is too low? Do you have any best practices (it should be at least 60-70%)?

    P.S. And I noticed that the best programmers are lazy but motivated; just being lazy is not enough :) But people who write the best code are attentive to details :)

    Read the article

  • Using Active Directory to authenticate users in a WWW facing website

    - by Basiclife
    Hi, I'm looking at starting a new web app which needs to be secure (if for no other reason than that we'll need PCI accreditation at some point). From previous experience working with PCI (on a domain), the preferred method is to use integrated Windows authentication which is then passed all the way through the app to the database. This allows for better auditing as well as object-level permissions (i.e. an end user can't read the credit card table). There are advantages in that even if someone compromises the webserver, they won't be able to glean any additional information from the database. Also, the webserver isn't storing any database credentials (beyond perhaps a simple anonymous user with very few permissions).

    So, now I'm looking at the new web app, which will be on the public internet. One suggestion is to have an Active Directory server and create Windows accounts on the AD for each user of the site. These users will then be placed into the appropriate NT groups to decide which DB permissions they should have (and which pages they can access). ASP.NET already provides the AD membership provider and role provider, so this should be fairly simple to implement.

    There are a number of questions around this - scalability, reliability, etc. - and I was wondering if there is anyone out there with experience of this approach or, even better, some good reasons why to do it / not to do it. Any input appreciated. Regards, Basiclife
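
    To make the last point concrete, the kind of configuration I have in mind is roughly this (a sketch only - the server name and directory path are made up; the provider is ASP.NET's built-in ActiveDirectoryMembershipProvider):

        <connectionStrings>
          <add name="ADConnection"
               connectionString="LDAP://ad.example.com/DC=example,DC=com" />
        </connectionStrings>
        <system.web>
          <authentication mode="Forms" />
          <membership defaultProvider="ADMembership">
            <providers>
              <add name="ADMembership"
                   type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                   connectionStringName="ADConnection"
                   attributeMapUsername="sAMAccountName" />
            </providers>
          </membership>
        </system.web>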

    Read the article

  • What is the best way to handle autorotation with multiple subviews?

    - by thangnguyen
    I am learning while programming an application, based on what I learned from the book Beginning iPhone 3 Development. I have two main questions here.

    I would like to create a multi-utility application, so I need multiple views. I have a main view controller which handles switching between views. In this example I have two views, A and B, two view controllers, A and B, which handle all of the events on those views, and two nib files, viewA.xib and viewB.xib. One of the utilities is reading PDFs, so I created another class which handles the PDF file and can load a PDF page, called PDFView. From Interface Builder, I set the class identity of the view in viewB.xib to this PDFView class. The result is that I can switch between view A and view B, and view B displays the content of the PDF page. I am not sure if my solution is right or wrong, but now I don't know how to handle autorotation: the rotation will activate view controller B, but PDFView handles how the PDF is displayed on the view. Could you please tell me the right way to handle this?

    Second question: should I create the subviews automatically? In case I need to do the swipe-page animation, how can I do that? I think I need to load another subview so I can animate when swapping views, but this solution seems to waste resources. I could just load another page of the PDF, but in that case I don't know how to use the animation. Please tell me how I should solve this?

    I highly appreciate your time reading and answering my question. Thang Nguyen

    Read the article

  • jquery filter .not()

    - by FFish
    I have a form with image thumbnails that can be selected with checkboxes for downloading. I want an array of the selected images in jQuery for an Ajax call. Two questions:

    - At the top of the table there is a checkbox to toggle all checkboxes, which I want to exclude from the mapping. I had a look at jQuery's .not(), but I can't work out how to combine it with the :checkbox selector.
    - Is the following example code correct?

        $(document).ready(function() {
            $('#myform').submit(function() {
                var images = $("input:checkbox", this).map(function() {
                    return $(this).attr("name");
                }).get().join();
                alert(images); // outputs: ",check1,check2,check3"
                return false;  // cancel submit action by returning false
            });
        }); // end doc ready

    HTML:

        <form id="myform" action="">
            <input type="checkbox" id="toggleCheck" onclick="toggleSelectAll()" checked="checked"><br />
            <input type="checkbox" name="001.jpg" checked="checked" /><br />
            <input type="checkbox" name="002.jpg" checked="checked" /><br />
            <input type="checkbox" name="003.jpg" checked="checked" /><br />
            <br />
            <input type="submit" value="download">
        </form>
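
    Is something like this the right way to combine them? (My guess, untested - map only the checked boxes and drop the toggle box with .not() before mapping; the download.php endpoint is hypothetical.)

        $('#myform').submit(function() {
            var images = $("input:checkbox:checked", this)
                .not("#toggleCheck")              // leave the "toggle all" box out
                .map(function() {
                    return $(this).attr("name");  // "001.jpg", "002.jpg", ...
                })
                .get();                           // plain array for the Ajax call

            // $.post("download.php", { images: images });  // hypothetical endpoint
            return false;
        });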

    Read the article

  • How do I add IIS Virtual Directories and arbitrary files in TFS Solution

    - by chriscena
    We have a web portal product from which we customize portals for customers. We use the precompiled web app and create a virtual directory (vd) where the customization resides. In addition to this we make some changes to web.config in the web app folder. We would obviously like to keep these customizations under TFS source control.

    When I try to add the precompiled web app (which I don't want to add to source control), a warning tells me that the vds cannot be added. If I only add the folder that is referenced by the vd, I lose the references to assemblies in the precompiled web app. My questions are:

    - How do I structure a solution for adding IIS (sub-application level) virtual directories and still retain the references to assemblies?
    - Is it possible to add other directories/files from the web application level (like App_Theme, web.config etc.) to the solution?

    Since we already use Visual SourceSafe, we have established a tree structure for each customization project:

        Project Root
        |
        |- Custom Sql
        |
        |- Custom Portal Files (which is added as a virtual directory)
        |
        |- Other Customizations

    I could probably do a lot of this manually through Source Control Explorer, but I'd like to have everything done through a solution. I've followed the instructions in this article: http://msdn.microsoft.com/en-us/library/bb668986.aspx, but it doesn't address the exact problem that I have. Oh, and we are currently using Visual SourceSafe for portal customization, but are eager to make the move to TFS. TIA

    Read the article

  • Is this a good way to expose generic base class methods through an interface?

    - by Nate Heinrich
    I am trying to provide an interface to an abstract generic base class. I want to have a method exposed on the interface that consumes the generic type, but whose implementation is ultimately handled by the classes that inherit from my abstract generic base. However, I don't want the subclasses to have to downcast to work with the generic type (as they already know what the type should be). Here is a simple version of the only way I can see to get it to work at the moment:

        public interface IFoo
        {
            void Process(Bar_base bar);
        }

        public abstract class FooBase<T> : IFoo where T : Bar_base
        {
            public abstract void Process(T bar);

            // Explicit IFoo implementation
            void IFoo.Process(Bar_base bar)
            {
                if (bar == null)
                    throw new ArgumentNullException();

                // Downcast here in base class (less for subclasses to worry about)
                T downcasted_bar = bar as T;
                if (downcasted_bar == null)
                {
                    throw new InvalidOperationException(
                        string.Format("Expected type '{0}', not type '{1}'",
                                      typeof(T), bar.GetType()));
                }

                // Process downcasted object.
                Process(downcasted_bar);
            }
        }

    Then subclasses of FooBase would look like this...

        public class Foo_impl1 : FooBase<Bar_impl1>
        {
            public override void Process(Bar_impl1 bar)
            {
                // No need to downcast here!
            }
        }

    Obviously this won't provide me compile-time type checking, but I think it will get the job done... Questions:

    1. Will this function as I think it will?
    2. Is this the best way to do this?
    3. What are the issues with doing it this way?
    4. Can you suggest a different approach?

    Thanks!

    Read the article

  • How to emulate "-lib foo.jar" from _within_ build.xml

    - by Thorbjørn Ravn Andersen
    By specifying "-lib foo.jar" to ant I get the behaviour that the classes in foo.jar is added to the ant classloader and are available for various tasks taking a class name argument. I'd like to be able to specify the same behaviour but only from inside build.xml (so we can do this on a vanilla ant). For taskdefs we have functioning code looking like: <taskdef resource="net/sf/antcontrib/antlib.xml" description="for/foreach tasks"> <classpath> <pathelement location="${active.workspace}/ant-contrib-1.X/lib/ant-contrib.jar" /> </classpath> </taskdef> where the definition is completely provided from the ant-contrib.jar listed. What is the equivalent mechanism for the "global" ant classpath? (I have thought out that this is the way to get <javac> use ecj-3.5.jar to compile with on a JRE - http://stackoverflow.com/questions/2364006/specifying-the-eclipse-compiler-completely-from-within-build-xml - in a way compatible with ant 1.7. Better suggestions are welcome :) EDIT: It appears that the about-to-be-released version 1.0 of ant4eclipse includes ecj. This does not answer the question, but may solve my basic problem.

    Read the article

  • Execute code on assembly load

    - by Dmitriy Matveev
    I'm working on a wrapper for a huge unmanaged library. Almost every one of its functions can call some error handler deep inside. The default error handler writes the error to the console and calls abort(). This behavior is undesirable for a managed library, so I want to replace the default error handler with my own, which will just throw some exception and let the program continue normal execution after this exception is handled. The error handler must be changed before any of the wrapped functions is called.

    The wrapper library is written in managed C++ with static linkage to the wrapped library, so nothing like "a type with hundreds of dll imports" is present. I also can't find a single type which is used by everything inside the wrapper library, so I can't solve the problem by defining a static constructor in one single type which will execute the code I need. I currently see two ways of solving this:

    1. Define some static method like Library.Initialize() which must be called once by the client before their code uses any part of the wrapper library (a sketch of this follows below).
    2. Find the most minimal subset of types which is used by every top-level function (I think the size of this subset will be something like 25-50 types) and add static constructors calling Library.Initialize (which would be internal in that scenario) to each of these types.

    I've read this and this question, but they didn't help me. Is there a proper way of solving this problem? Maybe some nice hacks are available?
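
    A minimal sketch of option 1 (purely illustrative - NativeMethods.SetErrorHandler is a stand-in for however the unmanaged library actually lets you register an error callback):

        using System;
        using System.Threading;

        internal delegate void NativeErrorHandler(string message);

        internal static class NativeMethods
        {
            // Hypothetical hook into the unmanaged library (in the real wrapper this
            // would be a C++/CLI call that swaps the default handler).
            internal static void SetErrorHandler(NativeErrorHandler handler) { /* ... */ }
        }

        public static class Library
        {
            private static int initialized;

            // Clients call this once before using any wrapped function.
            public static void Initialize()
            {
                // Only the first caller installs the handler; later calls are no-ops.
                if (Interlocked.Exchange(ref initialized, 1) == 0)
                {
                    NativeMethods.SetErrorHandler(message =>
                    {
                        throw new InvalidOperationException(message);
                    });
                }
            }
        }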

    Read the article

  • Get the values of checked [ALL] or unchecked boxes in jQuery

    - by python
    I have read this: http://stackoverflow.com/questions/2048485/jquery-checkbox

        <input type="checkbox" name="checkGroup" id="all">
        <input type="checkbox" name="checkGroup" id="one" value="1">
        <input type="checkbox" name="checkGroup" id="two" value="2">
        <input type="checkbox" name="checkGroup" id="three" value="3">
        <input type="hidden" name="storeCheck" value="">

        $(function(){
            $("#all").click(function(){
                $("input:checkbox[name='checkGroup']").attr("checked", $(this).attr("checked"));
            });
            $("input:checkbox[name='checkGroup']:not('#all')").click(function(){
                var totalCheckboxes   = $("input:checkbox[name='checkGroup']:not('#all')").length;
                var checkedCheckboxes = $("input:checkbox[name='checkGroup']:not('#all'):checked").length;
                if ( totalCheckboxes === checkedCheckboxes ) {
                    $("#all").attr("checked", true);
                } else {
                    $("#all").attr("checked", false);
                }
            });
        });

    Demo. I am trying to get the values of the checked checkboxes as an array. For example, if I check All, the array should be array_check = 1,2,3 and be passed to the hidden input named "storeCheck"; otherwise, array_check should hold the values of whichever checkboxes are checked, and that array should be passed to the hidden "storeCheck" input instead.
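
    Something like this is what I'm imagining (a rough, untested sketch; array_check is just the joined values):

        $("input:checkbox[name='checkGroup']").click(function () {
            // Collect the values of every checked box except the "all" toggle.
            var array_check = $("input:checkbox[name='checkGroup']:not('#all'):checked")
                .map(function () {
                    return $(this).val();   // "1", "2", "3"
                })
                .get();

            // Store them in the hidden field, e.g. "1,2,3" when everything is checked.
            $("input[name='storeCheck']").val(array_check.join(","));
        });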

    Read the article

  • What are the reasons to store documents in a DBMS when using Alfresco DMS?

    - by Julia
    Hello guys! I have an interview for an internship with a company that wants to implement a document management system. They are considering open source solutions first, with Alfresco as their top choice, but the decision is not final, and part of my work there would be to investigate whether Alfresco is the best solution.

    From the project description, they would implement Alfresco with a MySQL database, and not use the DBMS just for document metadata and indexing: they actually want to store the documents inside it. Given the company profile, the documents would be mostly PDF and .doc, not images. I have researched a bit, and I have read all the topics here related to storing files in the database, so as not to duplicate a question. From what I understand, storing BLOBs is generally not recommended, and given the company's profile and their legal obligations around archiving, I can see they will have to store a fairly large number of documents.

    I would like to be as prepared as I can for the interview, and that is why I would like your opinion on these questions:

    1. What would be your reasons for deciding to store documents in the DBMS (especially considering that you are installing Alfresco, which stores files in the file system)?
    2. Do you have any experience with storing documents in a MySQL database specifically?

    All help is very much appreciated. I am really excited about the interview and really want this internship, so this is one of the things I really want to understand beforehand! Thank you!

    Read the article

  • Make the browser go back by reloading the page first and then scrolling it back again too

    - by Marco Demaio
    EXPLAINING WHAT I'M TRYING TO SOLVE: I have a webpage (file_list.php) showing a list of files, and next to each file there is a button to delete it. When the user presses the DELETE button next to a certain file name, the browser goes to a script called delete_file.php that deletes the file and then tells the browser to go back to file_list.php. delete_file.php uses a simple header("Location: file_list.php"); to go back to file_list.php.

    When the browser goes back to file_list.php it reloads the page, but it DOES NOT scroll it back to where the user was before. So let's say the user scrolled the file list and deleted the last file; when the browser shows file_list.php again, it won't be scrolled to the bottom of the page.

    THE WORKAROUND I CAME UP WITH: I found a strange way to work around this. Basically, instead of using header("Location: file_list.php"); in delete_file.php, I simply use a JavaScript call window.history.go(-1). This workaround works perfectly when the user is in a session (simply using the PHP session_start function): the browser RELOADS the file_list.php page and then scrolls it back to where it was before. But if the user is NOT in a session, the browser scrolls the page but IT DOES NOT RELOAD IT first, so the user would still see the deleted file in the file list.

    THE QUESTIONS: Do you know how to reproduce the behavior the browser has when going back in a session, even when we are not in a session? Do you know a way out of this, or even another way of solving this matter? Thanks!

    *I know I could use AJAX to delete the file so I would not have to go to delete_file.php every time, but this is not the answer.*
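
    One direction I'm considering (a sketch only, not what the site does today): have file_list.php remember its own scroll offset in a cookie just before the browser leaves the page, and restore it when the page is reloaded after the redirect from delete_file.php.

        // On file_list.php: remember the scroll offset just before leaving the page.
        window.onbeforeunload = function () {
            var y = window.pageYOffset || document.documentElement.scrollTop || document.body.scrollTop;
            document.cookie = "listScroll=" + y + "; path=/";
        };

        // Still on file_list.php: after the redirect from delete_file.php, scroll back.
        window.onload = function () {
            var match = document.cookie.match(/(?:^|;\s*)listScroll=(\d+)/);
            if (match) {
                window.scrollTo(0, parseInt(match[1], 10));
            }
        };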

    Read the article

  • SD card initialization SPI

    - by Openavr
    Hi people, I have seen a lot of info about MMC/SD cards and I tried to make a library to read them (modifying the Procyon AVRlib), but I have some problems here. I didn't change the original code and tried it as-is. My problem is with the initialization of the SD card. I have two cards here, a 256MB one and a 1GB one.

    I send the init commands in this order: CMD0, CMD55, ACMD41, CMD1. The 256MB card returns only a 0x01 response for each command. I send CMD1 many times, but the 256MB card always returns 0x01, never 0x00. The 1GB card is stranger: CMD0 returns 0x01, which is fine, but CMD55 responds with 0x05, other times with 0xC1, and other times it responds 0xF0 followed by 0x5F in the next iteration.

    There is information and there are examples all around the internet, but they are a bit confusing. In my project I must use the 1GB card, and I'm trying with a MicroSD card in an SD adapter (I don't think that is the problem). Any help is appreciated! Regards.

    PS - my problem is like the one in http://stackoverflow.com/questions/2365897/initializing-sd-card-in-spi-issues but his solution didn't solve my problem. The 1GB SD only ever returns 0x01...

    Read the article

  • How to select a rectangle from List<Rectangle[]> with Linq

    - by dboarman
    I have a list of DrawObject[]. Each DrawObject has a Rectangle property. Here is my event:

        List<Canvas.DrawObject[]> matrix;

        void Control_MouseMove(object sender, MouseEventArgs e)
        {
            IEnumerable<Canvas.DrawObject> tile = Enumerable.Range(0, matrix.Capacity - 1)
                .Where(row => Enumerable.Range(0, matrix[row].Length - 1)
                    .Where(column => this[column, row].Rectangle.Contains(e.Location)))
                .????;
        }

    I am not sure exactly what my final select command should be in place of the "????". Also, I was getting an error: cannot convert IEnumerable to bool. I've read several questions about performing a LINQ query on a list of arrays, but I can't quite get what is going wrong with this. Any help?

    Edit: Apologies for not being clear in my intentions with the implementation. I intend to select the DrawObject that currently contains the mouse location.
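
    Is something like this what I should be doing instead? (A sketch only, assuming the DrawObjects in matrix are the ones to test, so the this[column, row] indexer isn't needed.)

        Canvas.DrawObject tile = matrix
            .SelectMany(row => row)                                      // flatten List<DrawObject[]> into one sequence
            .FirstOrDefault(obj => obj.Rectangle.Contains(e.Location));  // null if nothing is under the mouse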

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly only for some time. Later it always returns an empty result set. Surprisingly, restarting the django app fixes it and search works again, but again only for a short time (or a very limited number of queries). Here's my sphinx.conf:

        source src_questions
        {
            # data source type
            type            = mysql

            sql_host        = xxxxxx
            sql_user        = xxxxxx    # replace with your db username
            sql_pass        = xxxxxx    # replace with your db password
            sql_db          = xxxxxx    # replace with your db name

            # these two are optional
            sql_port        = xxxxxx
            #sql_sock       = /var/lib/mysql/mysql.sock

            # pre-query, executed before the main fetch query
            sql_query_pre   = SET NAMES utf8

            # main document fetch query
            sql_query       = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \
                              FROM question AS q \
                              WHERE q.deleted=0

            # optional - used by command-line search utility to display document information
            sql_query_info  = SELECT title, id, level FROM question WHERE id=$id

            sql_attr_uint   = level
        }

        index questions
        {
            # which document source to index
            source          = src_questions

            # this is path and index file name without extension
            # you may need to change this path or create this folder
            path            = /home/rafal/core_index/index_questions

            # docinfo (ie. per-document attribute values) storage strategy
            docinfo         = extern

            # morphology
            morphology      = stem_en

            # stopwords file
            #stopwords      = /var/data/sphinx/stopwords.txt

            # minimum word length
            min_word_len    = 3

            # uncomment next 2 lines to allow wildcard (*) searches
            min_infix_len   = 1
            enable_star     = 1

            # charset encoding type
            charset_type    = utf-8
        }

        # indexer settings
        indexer
        {
            # memory limit (default is 32M)
            mem_limit       = 64M
        }

        # searchd settings
        searchd
        {
            # IP address on which search daemon will bind and accept
            # optional, default is to listen on all addresses,
            # ie. address = 0.0.0.0
            address         = 127.0.0.1

            # port on which search daemon will listen
            port            = 3312

            # searchd run info is logged here - create or change the folder
            log             = ../log/sphinx.log

            # all the search queries are logged here
            query_log       = ../log/query.log

            # client read timeout, seconds
            read_timeout    = 5

            # maximum amount of children to fork
            max_children    = 30

            # a file which will contain searchd process ID
            pid_file        = searchd.pid

            # maximum amount of matches this daemon would ever retrieve
            # from each index and serve to client
            max_matches     = 1000
        }

    and here's the search part of views.py:

        content = Question.search.query(keywords)
        if level:
            content = content.filter(level=level)  # level is an array of integers

    There are no errors in any logs; it just isn't returning any results. All help would be most appreciated.

    Read the article

  • Are AJAX sites crawlable by search engines?

    - by frankadelic
    I had always assumed that AJAX-driven content was invisible to search engines (i.e. content inserted into the DOM via XMLHttpRequest). For example, on this site the main content is loaded by the browser via an AJAX request: http://www.trustedsource.org/query/terra.cl - if you view this page with JavaScript disabled, the main content area is blank. However, the Google cache shows the full content after the AJAX load: http://74.125.155.132/search?q=cache:JqcT6EVDHBoJ:www.trustedsource.org/query/terra.cl+http://www.trustedsource.org/query/terra.cl&cd=1&hl=en&ct=clnk&gl=us

    So, apparently search engines do index content loaded by AJAX. Questions:

    - Is this a new feature in search engines? Most postings on the web indicate that you have to publish duplicate static HTML content for search engines to find it.
    - Are there any tricks to get AJAX-driven content crawled by search engines (besides creating duplicate static HTML content)?
    - Will the AJAX-driven content be indexed if it is loaded from a separate subdomain? How about a separate domain?

    Read the article

  • GET params in ruby-on-rails project - best practices?

    - by Lynn C
    I've inherited a little Rails app and I need to extend it slightly. It's actually quite simple, but I want to make sure I'm doing it the right way.

    If I visit myapp:3000/api/persons it gives me a full list of people in XML format. I want to pass a param in the URL so that I can return users that match the login or the email, e.g. myapp:3000/api/persons?login=jsmith would give me the person with the corresponding login. Here's the code:

        def index
          if params.size > 2 # We have 'action' & 'controller' by default
            if params['login']
              @persons = [Person.find(:first, :conditions => { :login => params['login'] })]
            elsif params['email']
              @persons = [Person.find(:first, :conditions => { :email => params['email'] })]
            end
          else
            @persons = Person.find(:all)
          end
        end

    Two questions...

    1. Is it safe? Does ActiveRecord protect me from SQL injection attacks (notice I'm trusting the params that are coming in)?
    2. Is this the best way to do it, or is there some automagical Rails feature I'm not familiar with?

    Read the article
