Search Results

Search found 11823 results on 473 pages for 'save'.


  • Decoding a jpg in the background in WP7

    - by Shahar Prish
    I have a bunch of apps in the marketplace, and so far I have been able, by changing my functionality or going the extra mile, to work around the issue of being unable to decode a jpg in the background into a WriteableBitmap. I am now facing a situation where I can't think of a good way to work around it. I need to decode the image I get from MediaLibrary, reduce its resolution to something manageable (800x800), potentially rotate it, and save it to local storage. By far, the thing that takes the most time (80%) is decoding the bitmap to 800x800 - it takes between 700 ms and 1000 ms. A user may add 7-10 images when starting, which translates to roughly 10 seconds of waiting for the images to be added. I tried doing this lazily, but at some point you need to pay the piper and the app essentially stutters for about 1000 ms at that point, so the experience is not great. Is there an alternative I am missing for loading the image in the background somehow? (Note on why CreateOptions.BackgroundCreation is no good for me: it loads the image into a BitmapImage, which is great if you just want to display it, but not so great for what I need to do, which is create a copy in isolated storage.)
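
    A hedged sketch of one direction to explore (not from the question): move the decode/resize/re-encode onto a worker thread using PictureDecoder.DecodeJpeg, which accepts maximum pixel dimensions. Whether DecodeJpeg and SaveJpeg behave correctly off the UI thread on the target OS version is an assumption that needs verifying on a real device; treat this as a starting point, not a drop-in fix.

        // Sketch only: assumes PictureDecoder.DecodeJpeg / SaveJpeg tolerate being
        // called off the UI thread - verify on a device before relying on it.
        using System;
        using System.IO;
        using System.IO.IsolatedStorage;
        using System.Threading;
        using System.Windows.Media.Imaging;   // WriteableBitmap
        using Microsoft.Phone;                // PictureDecoder, SaveJpeg extension

        public static class BackgroundImageImporter
        {
            public static void Import(Stream pictureStream, string isoFileName, Action done)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // Decode straight to a bounded size instead of full resolution.
                    WriteableBitmap bmp = PictureDecoder.DecodeJpeg(pictureStream, 800, 800);

                    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
                    using (var target = store.CreateFile(isoFileName))
                    {
                        // Re-encode at the reduced size; quality 85 is an arbitrary choice.
                        bmp.SaveJpeg(target, bmp.PixelWidth, bmp.PixelHeight, 0, 85);
                    }

                    if (done != null) done();
                });
            }
        }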

    Read the article

  • First site going live real soon. Last minute questions

    - by user156814
    I am really close to finishing up a project that I've been working on. I have done websites before, but never on my own and never a site that involved user-generated data. I have been reading up on things that should be considered before you go live, and I have some questions.
    1) Staging (deploying updates without affecting users). I'm not really sure what this would entail, since I'm sure that any type of update would affect users in some way. Does this mean some type of temporary downtime for every update? Can somebody please explain this, and suggest a solution as well?
    2) Limits. I'm using the Kohana framework and I'm using the Auth module for logging users in. I was wondering if this already has some type of limit on login attempts built in, and if not, what would be the best way to implement one (save attempts in the database, a cookie, etc.). If this is not what's meant by limits, can somebody elaborate?
    3) Caching. Like I said, this is my first site built around user content. Considering that, should I cache it?
    4) Backups. How often should I back up my (MySQL) database, and how should I back it up (MySQL export?).
    The site is currently up, yet not finished, if anybody wants to look at it and see if something pops out that should be looked at or fixed: Clashing Thoughts. If there is anything else I overlooked that's not already in the list linked to above, please let me know. Thanks.

    Read the article

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB and most of it goes to one particular column, which is a text column for PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup.
    1) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage / performance perspective)?
    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows to store the text version of these PDF files. However, we will retrieve fewer pages, so instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and it will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.
    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k rows, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any other ideas on how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks.

    Read the article

  • Routing problem with calling a new method without an ID

    - by alkaloids
    I'm trying to put together a form_tag that edits several Shift objects. I have the form built properly, and it's passing on the correct parameters. I have verified that the parameters work for updating the objects correctly in the console. However, when I click the submit button, I get the error:

        ActiveRecord::RecordNotFound in ShiftsController#update_individual
        Couldn't find Shift without an ID

    My route for the controller it is calling looks like this:

        map.resources :shifts, :collection => { :update_individual => :put }

    The method in ShiftsController is this:

        def update_individual
          Shift.update(params[:shifts].keys, params[:shifts].values)
          flash[:notice] = "Schedule saved"
        end

    The relevant form parts are these:

        <% form_tag( update_individual_shifts_path ) do %>
          ... (fields for...)
          <%= submit_tag "Save" %>
        <% end %>

    Why is this not working? If I browse to the URL "http://localhost:3000/shifts/update_individual/5" (or any number that corresponds to an existing shift), I get the proper error about having no parameters set, but when I pass parameters without an ID of some sort, it errors out. How do I make it stop looking for an ID at the end of the URL?

    Read the article

  • Redirect uploaded files to another server, using nginx

    - by Serg ikS
    I am creating a web service for scheduling posts to a social network, and need help dealing with file uploads under high traffic. Process overview: the user uploads files to SomeServer (not mine); SomeServer then responds with a JSON string; my web app should store that JSON response.
    Option 1 - Save, cURL POST, delete tmp (the stupid way I made it work): the user uploads files to MyWebApp; MyWebApp cURLs the file on to SomeServer, getting the response.
    Option 2 - JS magic (the smart way it could be perfect): the user uploads the file directly to SomeServer from within an iFrame; MyWebApp gets the response through JavaScript. But this is impossible due to the Same Origin Policy, isn't it?
    Option 3 - nginx proxying? (the better way for a production server): the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp.
    Does this make any sense, and what would be the nginx config for, say, a /fileupload location to proxy it to SomeServer?

    Read the article

  • Copy Small Bitmaps onto a Large Bitmap with Transparency Blend: What is faster than graphics.DrawImage?

    - by Glenn
    I have identified this call as a bottleneck in a high pressure function:

        graphics.DrawImage(smallBitmap, x, y);

    Is there a faster way to blend small semi-transparent bitmaps into a larger semi-transparent one? Example usage:

        XY[] locations = GetLocs();
        Bitmap[] bitmaps = GetBmps(); // small images, sizes vary, approx 30px x 30px

        using (Bitmap large = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
        using (Graphics largeGraphics = Graphics.FromImage(large))
        {
            for (var i = 0; i < bitmaps.Length; i++)
            {
                // this is the bottleneck
                largeGraphics.DrawImage(bitmaps[i], locations[i].x, locations[i].y);
            }

            var done = new MemoryStream();
            large.Save(done, ImageFormat.Png);
            done.Position = 0;
            return done;
        }

    The DrawImage calls take small 32bppPArgb bitmaps and copy them into a larger bitmap at locations that vary, and the small bitmaps might only partially overlap the larger bitmap's visible area. Both images have semi-transparent contents that get blended by DrawImage in a way that is important to the output. I've done some testing with BitBlt but not seen significant speed improvement, and the alpha blending didn't come out the same in my tests. I'm open to just about any method, including a better call to BitBlt or unsafe C# code.
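
    Not part of the question - a minimal sketch of one low-cost thing to try before dropping to BitBlt or unsafe code: relaxing the GDI+ quality settings on the destination Graphics, and drawing with an explicit destination rectangle (or DrawImageUnscaled) so GDI+ does not apply DPI-based scaling. Whether this buys enough here is untested, and it is worth confirming the blended output still matches expectations after changing CompositingQuality.

        using System.Drawing;
        using System.Drawing.Drawing2D;

        // Sketch: cheaper GDI+ settings for straight 1:1 pasting of PArgb tiles.
        static void UseFastCompositing(Graphics g)
        {
            g.CompositingMode    = CompositingMode.SourceOver;        // keep alpha blending
            g.CompositingQuality = CompositingQuality.HighSpeed;
            g.InterpolationMode  = InterpolationMode.NearestNeighbor; // no resampling at 1:1
            g.SmoothingMode      = SmoothingMode.None;
            g.PixelOffsetMode    = PixelOffsetMode.None;
        }

        // Then, inside the loop, pinning the destination rectangle to the bitmap's
        // pixel size avoids the DPI conversion that DrawImage(bmp, x, y) performs:
        //     largeGraphics.DrawImage(bitmaps[i],
        //         new Rectangle(locations[i].x, locations[i].y,
        //                       bitmaps[i].Width, bitmaps[i].Height));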

    Read the article

  • prevent javascript in the WMD editor's preview box

    - by Justin Grant
    There are many SO questions (e.g. here and here) about how to do server-side scrubbing of Markdown produced by the WMD editor to ensure the generated HTML doesn't contain malicious script, like this:

        <img onload="alert('haha');" src="http://www.google.com/intl/en_ALL/images/srpr/logo1w.png" />

    Unfortunately, this still allows script to show up in the WMD client's preview box. I doubt this is a big deal, since if you're scrubbing the HTML on the server, an attacker can't save the bad HTML, so no one else will be able to see it later and have their cookies stolen or sessions hijacked by the bad script. But it's still kind of odd to allow an attacker to run any script in the context of your site, and it's probably a bad idea to let the client preview window accept HTML that your server would reject. Stack Overflow has clearly plugged this hole. How did they do it? [NOTE: I already figured this out, but it required some tricky JavaScript debugging, so I'm answering my own question here to help others who may want to do the same thing.]

    Read the article

  • How to tell which dataform button ended edit when using EventToCommand

    - by Rodd
    I'm new to Silverlight and MVVM Light. I have a DataForm on my view that displays/edits a SelectedPerson property (a Person object) of my view model. I want to execute a command on my view model when the user clicks the Save button, but don't want to take action if the user clicks Cancel. I added the following to my ViewModel:

        public RelayCommand PersonEditEnded { get; set; }
        ...
        public void Initialize()
        {
            PersonEditEnded = new RelayCommand(DoSomething);
            ...
        }

        public void DoSomething()
        {
        }

    I added the following to my View:

        <toolkit:DataForm x:Name="PersonForm" ...
                          CurrentItem="{Binding SelectedPerson, Mode=TwoWay}">
            <i:Interaction.Triggers>
                <i:EventTrigger EventName="EditEnded">
                    <gs:EventToCommand Command="{Binding PersonEditEnded, Mode=OneWay}"/>
                </i:EventTrigger>
            </i:Interaction.Triggers>
        </toolkit:DataForm>

    This works, and the DoSomething method is called when the user presses Submit. However, DoSomething is also called when the user presses Cancel. Is there a way to know which button was pressed, or to suppress the call when Cancel is pressed? Thanks for whatever help you can offer!
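
    Not part of the question - a hedged sketch of one direction to explore, assuming the Silverlight Toolkit DataForm's EditEnded event args expose an EditAction (Commit vs. Cancel) and that MVVM Light's EventToCommand forwards event args when PassEventArgsToCommand="True" is set on the trigger; both assumptions are worth verifying against the versions in use.

        // View model side: a typed RelayCommand that filters on the edit action.
        // using GalaSoft.MvvmLight.Command;  using System.Windows.Controls;
        public RelayCommand<DataFormEditEndedEventArgs> PersonEditEnded { get; set; }

        public void Initialize()
        {
            PersonEditEnded = new RelayCommand<DataFormEditEndedEventArgs>(e =>
            {
                if (e != null && e.EditAction == DataFormEditAction.Commit)
                {
                    DoSomething();   // runs for Save/Submit only, not for Cancel
                }
            });
        }

    with the trigger in the view changed to something like <gs:EventToCommand Command="{Binding PersonEditEnded, Mode=OneWay}" PassEventArgsToCommand="True"/>.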

    Read the article

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table ([id] is an identity):

        [id]  [parent_id]  [path]
        1     NULL         1
        2     1            1-2
        3     1            1-3
        4     3            1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of each node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on insert and maintain it in its own column, or generate it at query time to save disk space? I guess it depends on whether this table is write-heavy or read-heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. This "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path":
    1) AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity.
    2) INSTEAD OF INSERT TRIGGER - does not require the insert to pass a NULL path, but does require the trigger to insert with a NULL path and then update the path for the record at SCOPE_IDENTITY().
    3) STORED PROCEDURE - requires all inserts into this table to go through the stored procedure implementing the trigger logic.
    4) VIEW - requires building the path in the view.
    1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 require maintaining a path column on the table. 4 removes all the limitations of the above but requires the view to perform the path logic, and requires use of the view whenever a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.

    Read the article

  • Poor DB4O performance - What am I doing wrong?

    - by Jon
    I read Rob Conery's post about object databases and thought I'd give it a go. I have a class, let's say Book, with 15 properties: mostly strings, some ints, one DateTime and 2 decimals. I used Rob's code to open the database file etc., testing on a WinForms project, not a web project. So I did the following:

        for (int i = 0; i < 1000000; i++)
        {
            var book = new Book();
            // Assign properties
            s.Save(book);
        }
        s.CommitChanges();

    This took at least a couple of minutes. I then tried to retrieve all the data to test speed:

        var books = s.All<Book>();

    This one line was quick. However, then calling books.Count(), as well as a separate test doing a foreach loop and incrementing a counter, took a couple of minutes. This seems quite slow to me. Am I doing something wrong, or am I expecting too much?

    Read the article

  • Combining the streams: Web application

    - by Surendra J
    This question deals mainly with streams in a .NET web application. In my web application I will display the following, each with a checkbox for selection, and a button underneath: bottle.doc, sheet.xls, presentation.ppt, stackof.jpg. Suppose a user selected the four files and clicked the button. Then, for each type of file, I instantiate a class I already wrote that converts it to PDF, and the converted results are returned. My problem is that the classes are able to read the data from a URL and convert it to PDF, but I don't know how to return the streams and merge them.

        string url = @"url";
        // Prepare the web page we will be asking for
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.ContentType = "application/mspowerpoint";
        request.UserAgent = "Mozilla/4.0+(compatible;+MSIE+5.01;+Windows+NT+5.0";
        // Execute the request
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        // We will read data via the response stream
        Stream resStream = response.GetResponseStream();
        // Write content into the MemoryStream
        BinaryReader resReader = new BinaryReader(resStream);
        MemoryStream presentationStream = new MemoryStream(resReader.ReadBytes((int)response.ContentLength));
        // Convert the presentation stream into pdf and save it to the local disk -
        // but I would like to return the stream instead.

    How can I achieve this? Any ideas are welcome.
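
    Not from the question - a minimal sketch of the "return the stream" half: a helper that downloads a URL into a seekable MemoryStream and hands it back, so the caller can collect one stream per selected file and pass them to whatever does the PDF conversion/merging (combining several PDFs into one document generally needs a PDF library; that part is not shown).

        using System.IO;
        using System.Net;

        // Sketch: download a URL into a rewound MemoryStream and return it.
        static MemoryStream DownloadToStream(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "GET";

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var source = response.GetResponseStream())
            {
                var result = new MemoryStream();
                var buffer = new byte[8192];
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    result.Write(buffer, 0, read);   // copy until EOF; don't trust ContentLength
                }
                result.Position = 0;                 // rewind so the consumer reads from the start
                return result;
            }
        }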

    Read the article

  • mvc post techniques

    - by user281180
    My form is as follows; how can I retrieve the values in my controller?

        <% using (Html.BeginForm("Create", "Employee")) { %>
            <%= Html.ValidationSummary() %>
            <p>
                <label for="Name">Name</label>
                <%= Html.TextBoxFor(model => model.EmployeeName) %>
                <%= Html.ValidationMessageFor(model => model.EmployeeName) %>
            </p>
            <label for="ProName">Project</label>
            <table id="projectTable">
                <tr><td><label for="Name" id="1">UT1</label></td></tr>
                <tr><td><label for="Name" id="2">UT2</label></td></tr>
            </table>
            <input type="submit" value="Save" id="submit" />
        <% } %>
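
    Not from the question - a minimal sketch of the two usual ways the posted values reach the controller in ASP.NET MVC, assuming a hypothetical Employee model with an EmployeeName property (names here are illustrative):

        using System.Web.Mvc;

        public class Employee
        {
            public string EmployeeName { get; set; }
        }

        public class EmployeeController : Controller
        {
            // Option 1: model binding - posted fields whose names match properties are filled in.
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Create(Employee model)
            {
                if (!ModelState.IsValid)
                    return View(model);              // redisplay with validation messages

                string name = model.EmployeeName;
                // ... save and redirect ...
                return RedirectToAction("Index");
            }

            // Option 2: read raw posted values from the FormCollection by field name.
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult CreateFromForm(FormCollection form)
            {
                string name = form["EmployeeName"];
                return RedirectToAction("Index");
            }
        }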

    Read the article

  • How would you mask data returned in a Dynamic Data for Entities website?

    - by David Stratton
    I'm doing this in Visual Studio 2008, not 2010, in case there is a relevant difference between the two versions of the Dynamic Data websites. How would I mask data in the automatically generated tables in a Dynamic Data for Entities website? The scenario is that we have one table where we want to allow users to ENTER sensitive data, but not VIEW sensitive data, so... (In the list below, I'm using "template" to mean the web page generated automatically based on the schema and action. I'm sure that's the wrong terminology, but the meaning should be clear.)
    - The "Insert" template should have the field's textbox available for the user to type a value in.
    - The "Edit" template should have the field's textbox blanked out (an empty string) regardless of what was in the field in the database in the first place, but the user should be able to type in new data and have it save.
    - The "View" template should have the data for this field either masked or not visible.
    - The auto-generated table showing the list of records should also have this field masked or not visible.
    I can do this easily with standard Web Forms, but I'm having a hard time figuring this out in the Dynamic Data site I'm working on. Masking data is such a common task, I have to believe Microsoft thought of this and provided a way to do it.
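
    Not from the question - a hedged sketch of the mechanism Dynamic Data provides for this kind of per-field rendering: data annotations on a metadata "buddy" class route the column to a custom field template, whose read-only and edit variants you control. The names here (Employee, Salary, SensitiveText) are placeholders, and the .ascx templates themselves are not shown - the read-only one would render a mask, the edit one an empty textbox.

        using System.ComponentModel.DataAnnotations;

        // Partial class extending the generated entity; the metadata class carries the hints.
        [MetadataType(typeof(EmployeeMetadata))]
        public partial class Employee { }

        public class EmployeeMetadata
        {
            // Routes the column to ~/DynamicData/FieldTemplates/SensitiveText.ascx
            // (read-only) and SensitiveText_Edit.ascx (insert/edit).
            [UIHint("SensitiveText")]
            public object Salary { get; set; }

            // Or drop the column from scaffolded pages entirely:
            // [ScaffoldColumn(false)]
            // public object Salary { get; set; }
        }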

    Read the article

  • how to make the user add and delete in android

    - by user3678019
    I have one activity, and in this activity I have two WebViews next to each other. I would like to add an ADD and a DELETE button: ADD should add one more WebView next to the last WebView, and DELETE should remove whichever WebView the user chooses. I also want to let the user put them in the order he wants, e.g. webview 1 first, then webview 2 second. How can I do this? This is my main.xml:

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            tools:context="test.zezo.test.Main$PlaceholderFragment" >

            <HorizontalScrollView
                android:id="@+id/horizontalScrollView2"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                android:layout_alignParentLeft="true"
                android:layout_alignParentRight="true"
                android:layout_alignParentTop="true" >

                <LinearLayout
                    android:layout_width="match_parent"
                    android:layout_height="match_parent"
                    android:orientation="horizontal" >

                    <WebView
                        android:id="@+id/webView1"
                        android:layout_width="350dp"
                        android:layout_height="match_parent"
                        android:layout_alignParentLeft="true" />

                    <WebView
                        android:id="@+id/webView22"
                        android:layout_width="350dp"
                        android:layout_height="match_parent"
                        android:layout_toRightOf="@+id/webView1"
                        android:layout_alignParentLeft="true" />
                </LinearLayout>
            </HorizontalScrollView>
        </RelativeLayout>

    and this is part of my main.java:

        webView = (WebView) findViewById(R.id.webView1);
        String url = "http://google.com";
        webView.getSettings().setJavaScriptEnabled(true);
        webView.loadUrl(url);
        webView.setWebChromeClient(new WebChromeClient());
        webView.setWebViewClient(new WebViewClient());

        WebView webView22 = (WebView) findViewById(R.id.webView22);
        webView22.getSettings().setJavaScriptEnabled(true);
        webView22.loadUrl("http://google.com");
        webView22.setWebChromeClient(new WebChromeClient());
        webView22.setWebViewClient(new WebViewClient());

    So how can I add the ADD, DELETE and reorder buttons, and how can I persist the result so that when the user reopens the app it is the same as after he added, deleted or reordered?

    Read the article

  • Saving Abstract and Sub classes to database

    - by bretddog
    Hi, I have an abstract class "StrategyBase" and a set of subclasses, StrategyA/B/C etc. The subclasses use some of the properties of the base class, and have some individual properties. My question is how to save this to a database. I'm currently using SQL CE and LINQ to SQL, creating entity classes automatically with SqlMetal.exe. I've seen there are three solutions shown in this question, but I'm not able to see how these solutions will work or not with SqlMetal/entity classes, though it seems to me "concrete table inheritance" would probably work without any manual modification. What about the other two - would they be problematic? For "single table inheritance", wouldn't all classes get all variables, even though they don't need them? And for the "class table inheritance" solution, I can't really see at all how that will map into the entity classes for a useful purpose. I may note that I extend these partial entity classes to make my business object classes. I may also consider moving to Entity Framework instead of SqlMetal/LINQ to SQL, so it would be nice to know if that makes any difference to which schema is easy to implement. One likely important thing to note is that I will constantly be developing new strategies, which means I'll have to modify the program code, and probably the database schema, when adding a new strategy. Sorry the question is a bit "all over the place", but hopefully there are some clear advantages/disadvantages here that you may be able to advise on. Cheers!

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and Restlet. The Restlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB:

        Clients <= HTTP => [ Restlet <= HTTP => CouchDB ]

    I'm using CouchDB also to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by Restlet (auth data + the "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea now is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my Restlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cache data once a user changes the password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests using it. If authentication fails or there is no entry available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users that have changed their credentials will perhaps still be able to log in with their old ones. How can I deal with that? Or what is a good strategy for keeping the cache(s) up to date in general? Thanks in advance.

    Read the article

  • How do you organise your MVC controller tests?

    - by Andrew Bullock
    I'm looking for tidy suggestions on how people organise their controller tests. For example, take the "add" functionality of my "Address" controller:

        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult Add()
        {
            var editAddress = new DTOEditAddress();
            editAddress.Address = new Address();
            editAddress.Countries = countryService.GetCountries();

            return View("Add", editAddress);
        }

        [RequireRole(Role = Role.Write)]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Add(FormCollection form)
        {
            // save code here
        }

    I might have a fixture called "when_adding_an_address", however there are two actions I need to test under this title. I don't want to call both actions in my Act() method in my fixture, so I divide the fixture in half, but then how do I name the halves? "When_adding_an_address_GET" and "When_adding_an_address_POST"? Things just seem to be getting messy, quickly. Also, how do you deal with stateless/setup-less assertions for controllers, and how do you arrange these with respect to the above? For example:

        [Test]
        public void the_requesting_user_must_have_write_permissions_to_POST()
        {
            Assert.IsTrue(this.SubjectUnderTest.ActionIsProtectedByRole(c => c.Add(null), Role.Write));
        }

    This is custom code, I know, but you should get the idea: it simply checks that a filter attribute is present on the method. The point is it doesn't require any Arrange() or Act(). Any tips welcome! Thanks
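
    Not an answer from the source - a sketch of one naming convention that could be applied here, splitting fixtures by scenario rather than by HTTP verb and parking the stateless attribute checks in a per-controller "conventions" fixture; the base class name is hypothetical.

        using NUnit.Framework;

        [TestFixture]
        public class When_displaying_the_add_address_form : AddressControllerContext
        {
            [Test]
            public void it_renders_the_Add_view_with_a_country_list() { /* Arrange / Act / Assert */ }
        }

        [TestFixture]
        public class When_posting_a_new_address : AddressControllerContext
        {
            [Test]
            public void it_saves_the_address() { /* ... */ }
        }

        [TestFixture]
        public class AddressController_conventions : AddressControllerContext
        {
            [Test]
            public void the_POST_Add_action_requires_the_Write_role() { /* attribute check goes here */ }
        }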

    Read the article

  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content management capabilities. The people who will be managing the content prefer to use SharePoint Designer (successor to FrontPage) to modify content, and I'd like to allow them to keep doing that. The issues are:
    - Since I'd like this to be a WAP, not a website project, how can I allow them to see their changes in action without requiring them to have Visual Studio on their local machines? Can I specify a "default" action for a controller so that, given a URL like /products/new_view_here, the matching view is rendered? Can I let them save pages (views) and see them in the browser without having to go through the check-in/build/deploy process?
    - I'd like their changes to be stored in SVN; SharePoint Designer seems to only support Visual SourceSafe (ugh) directly.
    The ideas I've come up with so far are:
    1) Write an HTTP handler that implements the FrontPage Server Extensions protocol. This sounds time consuming, but I haven't yet looked at the protocol spec. However, it would allow me to perform whatever operations I want on the server side, including checking files into SVN.
    2) Ditch the WAP in favor of a website project. I do not like having the source present on the server, however. Also, will MVC work in a website project?
    Surely someone has tackled this problem before?
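
    Not from the question - a hedged sketch of what a "default action that renders whatever view name is in the URL" could look like in ASP.NET MVC; ContentController and the route name are hypothetical, and whether this plays well with the SharePoint Designer workflow is a separate matter.

        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;

        public class ContentController : Controller
        {
            // Renders /Views/Content/<viewName> if a view engine can find it; otherwise 404.
            public ActionResult Show(string viewName)
            {
                var found = ViewEngines.Engines.FindView(ControllerContext, viewName, null);
                if (found.View == null)
                    throw new HttpException(404, "View not found");

                return View(viewName);
            }
        }

        public static class ContentRoutes
        {
            // Called from Global.asax: anything under /products/* falls through to Show().
            public static void Register(RouteCollection routes)
            {
                routes.MapRoute(
                    "ProductContent",
                    "products/{viewName}",
                    new { controller = "Content", action = "Show", viewName = "index" });
            }
        }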

    Read the article

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this:

        create table MESSAGE_INDEX
        (
          KEY        VARCHAR2(256) not null,
          VALUE      VARCHAR2(4000) not null,
          MESSAGE_ID NUMBER not null
        )

    I now want to find all the messages where key = 'someKey' and the value is 'val1', 'val2' or 'val3' - OR the value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
          FROM message_index idx
         WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
            OR NOT EXISTS (SELECT 1 FROM message_index
                            WHERE key = 'someKey' AND idx.message_id = message_id))

    But it is extremely slow: it takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is key, value, message_id:

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index on message_id, to speed up searching for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next, e.g.:

        SELECT message_id FROM message_index
         WHERE (key/value compare)
           AND message_id IN (SELECT ... and so on)

    What can I do to speed this up?

    Read the article

  • Groovy / Scala / Java under the hood

    - by Jack
    I used Java for 6-7 years, then some months ago I discovered Groovy and started to save a lot of typing. Then I wondered how certain things worked under the hood (because Groovy performance is really poor) and understood that, to give you dynamic typing, every Groovy object carries a MetaClass that handles all the things the JVM couldn't handle by itself. Of course this introduces a layer in the middle, between what you write and what you execute, that slows down everything. Then some days ago I started getting some information about Scala. How do these two languages compare in their bytecode translations? How much do they add to the normal structure that would be obtained from plain Java code? I mean, Scala is statically typed, so wrappers of Java classes should be lighter, since many things are checked at compile time, but I'm not sure about the real differences in what goes on inside. (I'm not talking about the functional aspect of Scala compared to the other ones; that's a different thing.) Can someone enlighten me? From WizardOfOdds it seems like the only way to get less typing and the same performance would be to write an intermediate translator that translates something into Java code (letting javac compile it) without altering how things are executed, just adding syntactic sugar without caring about other drawbacks of the language itself.

    Read the article

  • asp.net: saving js file with c# commands...

    - by ile
    <head runat="server">
        <title><asp:ContentPlaceHolder ID="TitleContent" runat="server" /></title>
        <link href="../../Content/css/layout.css" rel="stylesheet" type="text/css" />
        <script type="text/javascript" src="/Areas/CMS/Content/js/jquery-1.3.2.min.js"></script>
        <script type="text/javascript" src="/Areas/CMS/Content/js/jquery.jeditable.js"></script>
        <script type="text/javascript" src="/Areas/CMS/Content/js/jeditable.js"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                $(".naslov_vijesti").editable('<%= Url.Action("UpdateSettings", "Article") %>', {
                    submit: 'ok',
                    submitdata: { field: "Title" },
                    cancel: 'cancel',
                    cssclass: 'editable',
                    width: '99%',
                    placeholder: 'empty',
                    indicator: "<img src='../../Content/img/indicator.gif'/>"
                });
            });
        </script>
    </head>

    This is the head tag of the site.master file. I would like to remove the multiline script block from the head and place it in jeditable.js, which is currently empty. If I just copy/paste, the <% %> part won't be executed. In PHP I would save the JS file as jeditable.js.php and the server would execute the code inside the <?php ?> tags. Any ideas how to solve this problem? Thanks in advance, Ile

    Read the article

  • ruby on rails - Problem with the selection form helper

    - by winter sun
    Hello, I have a form in which users can add their working hours, view them and edit them (all in one page). When adding working hours the user must select a project from a dropdown list. In case the action is adding a new hour record, the dropdown field should remain empty (not selected); in case the action is edit, the dropdown field should be preselected with the appropriate value. In order to overcome this challenge I wrote the following code:

        <% if params[:id].blank? %>
          <select name="hour[project_id]" id="hour_project_id">
            <option value="nil">Select Project</option>
            <% @projects.each do |project| %>
              <option value="<%= project.id %>"><%= project.name %></option>
            <% end %>
          </select>
        <% else %>
          <%= select('hour', 'project_id',
                     @projects.collect { |project| [project.name, project.id] },
                     { :prompt => 'Select Project' }) %>
        <% end %>

    So in the save case I built the dropdown list only with HTML, and in the edit case I built it with the collect method. It works fine until I tried to handle validation errors. The problem is that when I use the validation validates_presence_of :project_id, it isn't recognized for the hand-written HTML dropdown and the error message isn't displayed (it works only for the dropdown built with the collect method). I will deeply appreciate your instructions and help in this matter.

    Read the article

  • accepts_nested_attributes with Model.update for multiple models

    - by Ohad
    Hi, I'm trying to follow http://railscasts.com/episodes/198-edit-multiple-individually, but I would like to save objects which are nested (accepts_nested_attributes_for). I've added the following to my controller:

        def edit_multiple
          @people = Person.find(params[:person_ids], :include => [:parameters])
        end

        def update_multiple
          keys = params[:people].keys
          if keys.empty?
            flash[:error] = "Please select at least one person"
            redirect_to :back and return
          end
          values = keys.map { |k| params[:people][k] }
          @people = Person.update(keys, values).reject { |h| h.errors.empty? }
          if @people.empty?
            flash[:notice] = 'Updated people!'
            redirect_to person_path
          else
            redirect_to edit_multiple_path
          end
        end

    and in the view:

        <% form_tag update_multiple_people_path, :method => :post do %>
          <% for person in @people %>
            <% fields_for "people[]", host do |f| %>
              <%= f.error_messages :object_name => "person" %>
              <h3><%= h person.name %></h3>
              <% for parameter in person.parameters %>
                <% f.fields_for "person_parameters[]", parameter do |builder| -%>
                  <%= render "common/parameters", :f => builder %>
                <% end -%>
              <% end -%>
              <p><%= link_to_add_fields "Add a parameter", f, :person_parameters, "common/parameters" %></p>
            <% end %>
          <% end %>
          <p><%= submit_tag "Edit these Parameter(s)" %></p>
        <% end %>

    but I'm always getting a mismatch, e.g.:

        ActiveRecord::AssociationTypeMismatch
        Parameter(#70341811965140) expected, got Array(#70341874300460)

    Thanks!

    Read the article

  • A question about writing a background/automatic/silent downloader/installer for an app in C#.

    - by Mike Webb
    Background: I have a main application that needs to be able to go to the web and download DLL files associated with it (ones that we write, located on our server). It really needs to be able to download these DLL files into the application's folder under "C:\Program Files\". In the past I have used System.Net.WebClient to download whatever files I wanted from the web.
    The issue: I have had a lot of trouble downloading data and saving it to files on a user's hard drive. I get many reports from users saying that this does not work, and it is generally because of user rights issues in the program. In the cases where it was a rights issue, every user could go to the exact file location on the web, download it, and then save it to the right place manually. I want this to work like all the other programs I have seen download/install in this fashion (i.e. Firefox plugin updates, Flash Player, Java, Adobe Reader, etc.). All of these work without a hitch.
    The question: Is there some code I need to use to give my downloader program special rights to the Program Files folder? Can I even do this? Is there a better class or library that I should use? Is there a different approach to downloading files I should take, such as using threads or something else to download the data? Any help here is appreciated. I want to try to stay away from third-party apps/libraries if at all possible, other than Microsoft of course, due to licensing issues, but still send any suggestions my way. Again, other programs seem to have the rights issues and download capability figured out; I want that same capability.
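
    Not from the question - a minimal sketch of the download half, with hypothetical URL and file names. The rights half is the real problem: on Vista/7, writing under C:\Program Files requires the writing process to run elevated (for example via a requestedExecutionLevel entry in an application manifest, or by delegating the copy to an elevated updater process), which is broadly how the updaters mentioned above handle it.

        using System;
        using System.IO;
        using System.Net;

        // Sketch: download a plug-in DLL and place it in the application's install folder.
        // Assumes the process has write access to that folder (elevated, or the folder's
        // ACL was opened up at install time).
        static void DownloadPlugin(string url, string fileName)
        {
            string targetDir = AppDomain.CurrentDomain.BaseDirectory;  // the app's folder
            string tempFile  = Path.Combine(Path.GetTempPath(), fileName);

            using (var client = new WebClient())
            {
                client.DownloadFile(url, tempFile);   // download to a writable temp location first
            }

            // Move into the protected folder only after the download succeeded.
            File.Copy(tempFile, Path.Combine(targetDir, fileName), true);
            File.Delete(tempFile);
        }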

    Read the article

  • Access/Download server files, not in site root, with PHP

    - by user271619
    Usually I save documents (images, MPEGs, Excel files, Word docs, etc.) for my friends or family in my website's root, inside a directory called /files/ or something similar. Nothing too uncommon. But I have been playing with user session control, and allowing users to upload files to the dedicated /files/ directory (the file names are saved in a db, with that user's ID). But that means other people could try to guess and locate other people's files. I do randomize the file names upon upload, and I stop Apache from displaying the /files/ directory contents. However, I'd like to start saving the files outside of the website's root. This way they can't be accessed via the browser. I don't have any code to show, but I didn't want to even start on this endeavor if it can't be accomplished. I did find this snippet that shows how to display an image from outside your website root:

        $file = $_GET['file'];
        $fileDir = '/path/to/files/';

        if (file_exists($fileDir . $file)) {
            // Note: You should probably do some more checks
            // on the filetype, size, etc.
            $contents = file_get_contents($fileDir . $file);

            // Note: You should probably implement some kind
            // of check on filetype
            header('Content-type: image/jpeg');
            echo $contents;
        }
        ?>

    Maybe I can use this for any file type, but has anyone heard of a better way to allow (logged in) users to access their files online, without letting other users have similar access?

    Read the article
