Search Results

Search found 11954 results on 479 pages for 'gets'.


  • Why does the rename() syscall prohibit moving a directory that I can't write to a different directory?

    - by Daniel Papasian
    I am trying to understand why this design decision was made with the rename() syscall in 4.2BSD. There's nothing I'm trying to solve here, just understand the rationale for the behavior itself. 4.2BSD saw the introduction of the rename() syscall for the purpose of allowing atomic renames/moves of files. From 4.3BSD-Reno/src/sys/ufs/ufs_vnops.c:

        /*
         * If ".." must be changed (ie the directory gets a new
         * parent) then the source directory must not be in the
         * directory heirarchy above the target, as this would
         * orphan everything below the source directory. Also
         * the user must have write permission in the source so
         * as to be able to change "..". We must repeat the call
         * to namei, as the parent directory is unlocked by the
         * call to checkpath().
         */
        if (oldparent != dp->i_number)
            newparent = dp->i_number;
        if (doingdirectory && newparent) {
            VOP_LOCK(fndp->ni_vp);
            error = ufs_access(fndp->ni_vp, VWRITE, tndp->ni_cred);
            VOP_UNLOCK(fndp->ni_vp);

    So clearly this check was added intentionally. My question is: why? Is this behavior supposed to be intuitive? The effect of it is that one cannot atomically move a directory that one cannot write (even when it sits in a directory one can write) into another directory that one can write. One can, however, create a new directory, move the links over (assuming one has read access to the directory), and then remove one's write bit on the directory. One just can't do so atomically.

        % cd /tmp
        % mkdir stackoverflow-question
        % cd stackoverflow-question
        % mkdir directory-1
        % mkdir directory-2
        % mkdir directory-1/directory-i-cant-write
        % echo "foo" > directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write
        % mv directory-1/directory-i-cant-write directory-2
        mv: rename directory-1/directory-i-cant-write to directory-2/directory-i-cant-write: Permission denied

    We now have a directory I can't write, with contents I can't read, that I can't move atomically. I can, however, achieve the same effect non-atomically by changing permissions, making the new directory, using ln to create the new links, and changing permissions back (left as an exercise to the reader). . and .. are special-cased already, so I don't particularly buy that it is intuitive that if I can't write a directory I can't "change ..", which is what the source suggests. Is there any reason for this besides it being the perceived correct behavior by the author of the code? Is there anything bad that can happen if we let people atomically move directories (that they can't write) between directories that they can write?
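
    For illustration only (this sequence is not part of the original post), the non-atomic workaround alluded to above might look roughly like the following, assuming the user owns directory-i-cant-write and can therefore restore its permission bits:

        % chmod u+rwx directory-1/directory-i-cant-write      # the owner may always change the mode back
        % mkdir directory-2/directory-i-cant-write
        % mv directory-1/directory-i-cant-write/contents directory-2/directory-i-cant-write/
        % rmdir directory-1/directory-i-cant-write
        % chmod 000 directory-2/directory-i-cant-write

    Any other process that looks at the tree mid-sequence can observe the intermediate states, which is exactly what the atomic rename() would have avoided.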

    Read the article

  • Menu Control in Master Page fails to use CSS styles

    - by Shaun
    I'm working on a web application that uses ASP.NET 3.5 and C#. Structurally, I have a master page with a menu control on it. The control serves as my navigation, and it gets its items from a SiteMapDataSource control and a corresponding Web.sitemap file. The problem is that some styles do not render properly when you specify the CssClass property. More specifically, the selected and hover styles don't respond to css styles. Consider the code below: <%@ Master Language="C#" AutoEventWireup="true" CodeFile="Site.master.cs" Inherits="Site" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.or/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>A webpage</title> </head> <body> <form id="form1" runat="server"> <div id="page"> <asp:Menu ID="navMenu" Orientation="Horizontal" StaticMenuStyle-CssClass="staticMenu" StaticMenuItemStyle-CssClass="staticMenuItem" StaticSelectedStyle-CssClass="staticSelectedItem" StaticHoverStyle-CssClass="staticHoverItem" runat="server"> </asp:Menu> <asp:SiteMapDataSource ID="srcSiteMap" runat="server" ShowStartingNode="false" /> <br /> <asp:ContentPlaceHolder id="ContentPlaceHolder1" runat="server"> </asp:ContentPlaceHolder> </div> </form> </body> </html> Suppose I had a corresponding .css file with the following: .staticMenuItem { background-color:Red; } .staticSelectedItem { background-color:Green; } .staticHoverItem { background-color:Blue; } What will happen is that my item backgrounds will properly be red, but my selected item will not be green and the item I'm hovering my mouse over will not be blue. This seems true regardless of whether or not I include the style in the head of the master page or in an external file in default theme as specified in the web.config file. If I specify the styles in the asp.net xml like so: <asp:Menu ID="navMenu" Orientation="Horizontal" runat="server"> <StaticSelectedStyle BackColor="Green" Font-Underline="True" Font-Bold="True" /> <StaticHoverStyle BackColor="Gray" /> </asp:Menu> It appears to work properly in Firefox, but the style is never embedded in the html in Internet Explorer. Odd. Does anybody have any insight into what is causing this problem and how to neatly work around it? I'm aware I might be able to programmically determine the current page and select the corresponding menu item manually so it receives the proper style class, but before I resort to hacking C# and Javascript together to fix this functionality, I'm open to ideas.
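
    As a partial workaround (this is not from the original post), the hover colour at least can be expressed with an ordinary descendant selector, so it no longer depends on how the Menu control wires up StaticHoverStyle. This assumes the rendered menu items are anchors inside the element that receives the staticMenu class, which is how the 3.5 Menu control renders them by default:

        .staticMenu a:hover
        {
            background-color: Blue;
        }

    The selected-item colour still needs the control (or a little code-behind) to mark the current item, because plain CSS cannot know which page is active.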

    Read the article

  • Wicket and Spring Integration

    - by Vinothbabu
    I have a wicket contact form, and i receive the form object. Now i need to pass this object to Spring Service. package com.mysticcoders.mysticpaste.web.pages; import org.apache.wicket.markup.html.WebPage; import org.apache.wicket.markup.html.form.Form; import org.apache.wicket.markup.html.form.TextField; import org.apache.wicket.markup.html.panel.FeedbackPanel; import com.mysticcoders.mysticpaste.model.Contact; import org.apache.wicket.model.CompoundPropertyModel; import com.mysticcoders.mysticpaste.services.IContact; public class FormPage extends WebPage { private Contact contact; private IContact icontact; public FormPage() { // Add a FeedbackPanel for displaying our messages FeedbackPanel feedbackPanel = new FeedbackPanel("feedback"); add(feedbackPanel); Form<Object> form = new Form<Object>("contactForm", new CompoundPropertyModel<Object>(contact)) { private static final long serialVersionUID = 1L; protected void onSubmit(Contact contact) { icontact.saveContact(contact); } }; form.add(new TextField<Object>("name")); form.add(new TextField<Object>("email")); form.add(new TextField<Object>("country")); form.add(new TextField<Object>("age")); add(form); // add a simple text field that uses Input's 'text' property. Nothing // can go wrong here } } I am pretty much sure that we need to do something with application-context xml where i may need to wire out. My Application-context.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd"> <bean id="WicketApplication" class="com.mysticcoders.mysticpaste.web.pages.WicketApplication" /> </beans> My Question is simple. What should i do which can make my onSubmit method call the Spring Service? Could someone let me know what needs to modified in my Application-context.xml so that once the form gets submitted, it contacts the Spring Service class.
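
    For reference, a minimal sketch of the usual wicket-spring wiring (not from the original code; the bean id and class names are assumed, and the exact listener API varies slightly between Wicket versions):

        // WicketApplication.java - register the injector so @SpringBean fields get filled in
        import org.apache.wicket.protocol.http.WebApplication;
        import org.apache.wicket.spring.injection.annot.SpringComponentInjector;

        public class WicketApplication extends WebApplication {
            @Override
            protected void init() {
                super.init();
                // pulls beans out of the Spring ApplicationContext and injects them
                // into any component field annotated with @SpringBean
                addComponentInstantiationListener(new SpringComponentInjector(this));
            }

            @Override
            public Class<? extends org.apache.wicket.Page> getHomePage() {
                return FormPage.class;
            }
        }

    In FormPage the field then becomes "@SpringBean private IContact icontact;", and the Application-context.xml needs a bean for the service implementation, e.g. <bean id="contactService" class="com.mysticcoders.mysticpaste.services.ContactServiceImpl"/> (implementation class name assumed). Two further notes, both hedged: the application is normally bootstrapped through Spring (for example SpringWebApplicationFactory in web.xml) so the injector can find the context, and Wicket's Form.onSubmit() takes no arguments, so the onSubmit(Contact contact) overload above will never be called by Wicket - override the no-argument version and read the Contact back out of the form's model instead.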

    Read the article

  • Replace click() with document.ready() in jQuery

    - by bala3569
    I downloaded a jQuery effects example and all of the effects appear only on click, but I want them to run on document.ready() and keep going. The script is:

        <script type="text/javascript">
        var ImgIdx = 2; // To mark which image will be selected next
        function PreloadImg(){
            $.ImagePreload("images/im2.jpg");
            $.ImagePreload("images/im3.jpg");
            $.ImagePreload("images/im4.jpg");
            $.ImagePreload("images/im5.jpg");
        }
        $(document).ready(function(){
            PreloadImg();
            $(".SlashEff ul li").click(function(){
                $(".Slash").ImageSwitch({Type:$(this).attr("rel"),
                    NewImage:"images/im"+ImgIdx+".jpg",
                    speed: 4000
                });
                ImgIdx++;
                if(ImgIdx>5) ImgIdx = 1;
            });
        });
        </script>

    and my markup is:

        <div class="SlashEff">
            <ul>
                <li class="TryFadeIn" rel="FadeIn">Fade in</li>
                <li class="TryFlyIn" rel="FlyIn">Fly in</li>
                <li class="TryFlyOut" rel="FlyOut">Fly out</li>
                <li class="TryFlipIn" rel="FlipIn">Flip in</li>
                <li class="TryFlipOut" rel="FlipOut">Flip out</li>
                <li class="TryScroll" rel="ScrollIn">Scroll in</li>
                <li class="TryScroll" rel="ScrollOut">Scroll out</li>
                <li class="TrySingleDoor" rel="SingleDoor">Single Door</li>
                <li class="TryDoubleDoor" rel="DoubleDoor">Double Door</li>
            </ul>
        </div>

    Here is the link: http://www.hieu.co.uk/blog/index.php/imageswitch/ I tried this:

        $(document).ready(function(){
            PreloadImg();
            $(".Slash").ImageSwitch({Type:$(this).attr("rel"),
                NewImage:"images/im"+ImgIdx+".jpg",
                speed: 4000
            });
            ImgIdx++;
            if(ImgIdx>5) ImgIdx = 1;
        });

    but it gets executed only once. I want to execute it every 5000 ms - is that possible?
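
    A minimal sketch of one way to get that behaviour (this is not part of the downloaded example): the attempt above runs only once because $(document).ready() fires a single time, and inside it $(this) is not one of the list items, so Type ends up undefined. Wrapping the same call in setInterval() and stepping through the rel values already present in the markup repeats it every 5000 ms:

        $(document).ready(function () {
            PreloadImg();
            var effects = $(".SlashEff ul li");        // reuse the rel attributes from the markup
            var effIdx = 0;
            setInterval(function () {
                $(".Slash").ImageSwitch({
                    Type: effects.eq(effIdx).attr("rel"),
                    NewImage: "images/im" + ImgIdx + ".jpg",
                    speed: 4000
                });
                ImgIdx++;
                if (ImgIdx > 5) ImgIdx = 1;
                effIdx = (effIdx + 1) % effects.length;
            }, 5000);
        });

    Alternatively, calling effects.eq(effIdx).click() inside the timer would simply re-trigger the existing click handler.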

    Read the article

  • Unique_ptr compiler errors

    - by Godric Seer
    I am designing and entity-component system for a project, and C++ memory management is giving me a few issues. I just want to make sure my design is legitimate. So to start I have an Entity class which stores a vector of Components: class Entity { private: std::vector<std::unique_ptr<Component> > components; public: Entity() { }; void AddComponent(Component* component) { this -> components.push_back(std::unique_ptr<Component>(component)); } ~Entity(); }; Which if I am not mistaken means that when the destructor is called (even the default, compiler created one), the destructor for the Entity, will call ~components, which will call ~std::unique_ptr for each element in the vector, and lead to the destruction of each Component, which is what I want. The component class has virtual methods, but the important part is its constructor: Component::Component(Entity parent) { parent.addComponent(this) // I am not sure if this would work like I expect // Other things here } As long as passing this to the method works, this also does what I want. My confusion is in the factory. What I want to do is something along the lines of: std::shared_ptr<Entity> createEntity() { std::shared_ptr<Entity> entityPtr(new Entity()); new Component(*parent); // Initialize more, and other types of Components return entityPtr; } Now, I believe that this setup will leave the ownership of the Component in the hands of its Parent Entity, which is what I want. First a small question, do I need to pass the entity into the Component constructor by reference or pointer or something? If I understand C++, it would pass by value, which means it gets copied, and the copied entity would die at the end of the constructor. The second, and main question is that code based on this sample will not compile. The complete error is too large to print here, however I think I know somewhat of what is going on. The compiler's error says I can't delete an incomplete type. My Component class has a purely virtual destructor with an implementation: inline Component::~Component() { }; at the end of the header. However since the whole point is that Component is actually an interface. I know from here that a complete type is required for unique_ptr destruction. The question is, how do I work around this? For reference I am using gcc 4.4.6.
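
    A sketch of one arrangement that addresses both points (file names and layout are assumed, not taken from the project): pass the Entity by reference so the Component registers with the real parent rather than a copy, and define Entity's destructor in a translation unit where Component is a complete type, so the unique_ptr deleter can be instantiated there:

        // Entity.h
        #include <memory>
        #include <vector>

        class Component;                     // forward declaration is enough for the member

        class Entity {
        public:
            Entity();
            ~Entity();                       // declared here, defined in Entity.cpp
            void AddComponent(Component* component);
        private:
            std::vector<std::unique_ptr<Component> > components;
        };

        // Component.h
        class Entity;

        class Component {
        public:
            explicit Component(Entity& parent);   // by reference: no copy is made
            virtual ~Component();
        };

        // Entity.cpp
        #include "Entity.h"
        #include "Component.h"               // Component is complete here, so deletion is legal

        Entity::Entity() { }
        Entity::~Entity() { }                // the vector and its unique_ptr deleters run here
        void Entity::AddComponent(Component* component) {
            components.push_back(std::unique_ptr<Component>(component));
        }

        // Component.cpp
        #include "Component.h"
        #include "Entity.h"

        Component::Component(Entity& parent) { parent.AddComponent(this); }
        Component::~Component() { }

    With that layout, the factory's "new Component(*entityPtr)" (the original snippet says *parent, presumably a typo) hands ownership straight to the Entity, as intended.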

    Read the article

  • StumbleUpon-type query

    - by Chris Denman
    Wow, makes your head spin! I am about to start a project, and although my mySql is OK, I can't get my head around what required for this: I have a table of web addresses. id,url 1,http://www.url1.com 2,http://www.url2.com 3,http://www.url3.com 4,http://www.url4.com I have a table of users. id,name 1,fred bloggs 2,john bloggs 3,amy bloggs I have a table of categories. id,name 1,science 2,tech 3,adult 4,stackoverflow I have a table of categories the user likes as numerical ref relating to the category unique ref. For example: user,category 1,4 1,6 1,7 1,10 2,3 2,4 3,5 . . . I have a table of scores relating to each website address. When a user visits one of these sites and says they like it, it's stored like so: url_ref,category 4,2 4,3 4,6 4,2 4,3 5,2 5,3 . . . So based on the above data, URL 4 would score (in it's own right) as follows: 2=2 3=2 6=1 What I was hoping to do was pick out a random URL from over 2,000,000 records based on the current users interests. So if the logged in user likes categories 1,2,3 then I would like to ORDER BY a score generated based on their interest. If the logged in user likes categories 2 3 and 6 then the total score would be 5. However, if the current logged in user only like categories 2 and 6, the URL score would be 3. So the order by would be in context of the logged in users interests. Think of stumbleupon. I was thinking of using a set of VIEWS to help with sub queries. I'm guessing that all 2,000,000 records will need to be looked at and based on the id of the url it will look to see what scores it has based on each selected category of the current user. So we need to know the user ID and this gets passed into the query as a constant from the start. Ain't got a clue! Chris Denman
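
    A rough sketch of the kind of query being described (table and column names here are guesses based on the lists above, and the user id would be supplied by the application as a constant): join the per-URL likes against the logged-in user's interest categories, so each URL's score is computed only over that user's categories, then order by the score:

        SELECT s.url_ref,
               COUNT(*) AS score
        FROM   scores s
        JOIN   user_categories uc ON uc.category = s.category
        WHERE  uc.user = 42                -- current user's id, passed in as a constant
        GROUP  BY s.url_ref
        ORDER  BY score DESC, RAND()
        LIMIT  1;

    Only rows in the likes table that fall in the user's categories take part in the grouping, so the 2,000,000 URLs are not scored one by one; the ORDER BY RAND() tie-break can still be expensive, though, and is worth testing at full volume.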

    Read the article

  • Error trying to run rails server

    - by David87
    I am trying to get a basic Rails application to run on my Mac OS X 10.6.5. I created a new app called demo (rails new demo), then went into the demo directory and tried to start the app with rails server. Here is the error message I received: "/Users/dpetrovi/.gem/ruby/1.8/gems/sqlite3-ruby-1.3.2/lib/sqlite3/sqlite3_native.bundle: [BUG] Segmentation fault ruby 1.8.7 (2010-12-23 patchlevel 330) [i686-darwin10] Abort trap" I checked bundle install in the demo folder: "Using rake (0.8.7) Using abstract (1.0.0) Using activesupport (3.0.3) Using builder (2.1.2) Using i18n (0.5.0) Using activemodel (3.0.3) Using erubis (2.6.6) Using rack (1.2.1) Using rack-mount (0.6.13) Using rack-test (0.5.6) Using tzinfo (0.3.23) Using actionpack (3.0.3) Using mime-types (1.16) Using polyglot (0.3.1) Using treetop (1.4.9) Using mail (2.2.13) Using actionmailer (3.0.3) Using arel (2.0.6) Using activerecord (3.0.3) Using activeresource (3.0.3) Using bundler (1.0.7) Using thor (0.14.6) Using railties (3.0.3) Using rails (3.0.3) Using sqlite3-ruby (1.3.2) Your bundle is complete! Use bundle show [gemname] to see where a bundled gem is installed." Ruby, RubyGems, and sqlite3 were installed using MacPorts. Then I used gem to try to install the sqlite3-ruby interface. (sudo gem install sqlite3-ruby). Here is where I first noticed something could be off: "Successfully installed sqlite3-ruby-1.3.2 1 gem installed Installing ri documentation for sqlite3-ruby-1.3.2... No definition for libversion Enclosing class/module 'mSqlite3' for class Statement not known Installing RDoc documentation for sqlite3-ruby-1.3.2... No definition for libversion Enclosing class/module 'mSqlite3' for class Statement not known " I had rails running well on my system a few months ago, so I figured maybe I had some duplicates and it was trying to use the wrong one. I ran: "for cmd in ruby irb gem rake; do which $cmd; done" and got: "/opt/local/bin/ruby /opt/local/bin/irb /opt/local/bin/gem /opt/local/bin/rake" Checking where sqlite3 also gets me: "/opt/local/bin/sqlite3" so they all seem to be in the right place. Obviously /opt/local/bin is in my system path. If I check gems server, it shows that I have installed sqlite3-ruby 1.3.2 gem. Not sure what the problem could be? I am using ruby 1.8.7 (2010-12-23 patchlevel 330) [i686-darwin10]. Macports claims this is the latest (although ive seen 1.9.1) One more thing-- in irb, I tried to check which version of sqlite3 my sqlite3-ruby is bound to, but I can only get this far: ":irb(main):001:0 require 'rubygems' = true irb(main):002:0 require 'sqlite3' /Users/dpetrovi/.gem/ruby/1.8/gems/sqlite3-ruby-1.3.2/lib/sqlite3/sqlite3_native.bundle: [BUG] Segmentation fault ruby 1.8.7 (2010-12-23 patchlevel 330) [i686-darwin10] Abort trap" Any suggestions? Im hoping I overlooked something obvious. Thanks

    Read the article

  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app i need to capture mic samples to encode into mp3 with ffmpeg First configure the audio: /** * We need to specifie our format on which we want to work. * We use Linear PCM cause its uncompressed and we work on raw data. * for more informations check. * * We want 16 bits, 2 bytes (short bytes) per packet/frames at 8khz */ AudioStreamBasicDescription audioFormat; audioFormat.mSampleRate = SAMPLE_RATE; audioFormat.mFormatID = kAudioFormatLinearPCM; audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger; audioFormat.mFramesPerPacket = 1; audioFormat.mChannelsPerFrame = 1; audioFormat.mBitsPerChannel = audioFormat.mChannelsPerFrame*sizeof(SInt16)*8; audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame*sizeof(SInt16); audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame*sizeof(SInt16); The recording callback is: static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) { NSLog(@"Log record: %lu", inBusNumber); NSLog(@"Log record: %lu", inNumberFrames); NSLog(@"Log record: %lu", (UInt32)inTimeStamp); // the data gets rendered here AudioBuffer buffer; // a variable where we check the status OSStatus status; /** This is the reference to the object who owns the callback. */ AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon; /** on this point we define the number of channels, which is mono for the iphone. the number of frames is usally 512 or 1024. */ buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // sample size buffer.mNumberChannels = 1; // one channel buffer.mData = malloc( inNumberFrames * sizeof(SInt16) ); // buffer size // we put our buffer into a bufferlist array for rendering AudioBufferList bufferList; bufferList.mNumberBuffers = 1; bufferList.mBuffers[0] = buffer; // render input and check for error status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList); [audioProcessor hasError:status:__FILE__:__LINE__]; // process the bufferlist in the audio processor [audioProcessor processBuffer:&bufferList]; // clean up the buffer free(bufferList.mBuffers[0].mData); //NSLog(@"RECORD"); return noErr; } With data: inBusNumber = 1 inNumberFrames = 1024 inTimeStamp = 80444304 // All the time same inTimeStamp, this is strange However, the framesize that i need to encode mp3 is 1152. How can i configure it? If i do buffering, that implies a delay, but i would like to avoid this because is a real time app. If i use this configuration, each buffer i get trash trailing samples, 1152 - 1024 = 128 bad samples. All samples are SInt16.
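
    Since the Remote I/O unit hands the callback whatever slice size it chooses (1024 frames here), the usual approach is to decouple the two sizes with a small FIFO and feed the encoder fixed 1152-sample frames as soon as enough data has accumulated. A minimal sketch (not from the original code; encodeMp3Frame is a hypothetical stand-in for the ffmpeg call, and a production version would use a proper lock-free ring buffer):

        #include <string.h>
        /* SInt16 comes from MacTypes.h / CoreFoundation, as in the question */

        #define MP3_FRAME_SAMPLES 1152

        extern void encodeMp3Frame(const SInt16 *samples, int count);   /* hypothetical */

        static SInt16 fifo[MP3_FRAME_SAMPLES * 4];
        static int    fifoCount = 0;

        static void appendSamples(const SInt16 *src, int n)
        {
            memcpy(fifo + fifoCount, src, n * sizeof(SInt16));   /* assumes fifo never overflows */
            fifoCount += n;
            while (fifoCount >= MP3_FRAME_SAMPLES) {
                encodeMp3Frame(fifo, MP3_FRAME_SAMPLES);
                fifoCount -= MP3_FRAME_SAMPLES;
                memmove(fifo, fifo + MP3_FRAME_SAMPLES, fifoCount * sizeof(SInt16));
            }
        }

    The buffering adds at most 1152 samples of latency, which at 8 kHz is about 0.14 s, so it does not meaningfully hurt a real-time capture path, and it avoids padding each 1024-sample slice with 128 garbage samples.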

    Read the article

  • Optimizing jQuery for Tabs

    - by jpdbaugh
    I am in the process of developing a widget. The widget has three tabs that are implemented in the following way. <div id="widget"> <ul id="tabs"> <li><a href="http...">One</a></li> <li><a href="http...">Two</a></li> <li><a href="http...">Three</a></li> </ul> <div id="tab_container"> <div id="tab_content"> //Tab Content goes here... </div> </div> </div> // The active class is initialized when the document loads $("#tabs li a").click(function() { $("#tabs li.active").removeClass("active"); $("#tab_content").load($(this).attr('href')); $(this).parent().addClass("active"); return false; }); The problem I am having is that the jquery code that have written is very slow. If the user changes tabs quickly the widget gets behing and bogged down. This causes the tabs to to not align with the data being displayed and just general lag. I believe that this is because the tab is being changed before $.load() is finished. I have tried to implement the following: ("#tabs li a").click(function() { $("#tabs li.active").removeClass("active"); $("#tab_content").load($(this).attr('href'), function (){ $(this).parent().addClass("active"); }); return false; }); It is my understanding that the callback function within in the load function does not execute until the load function is completed. I think this would solve my problem, however I can not come up with a way to select the correct tab that was clicked within the callback function. If this is not the way to do this then what is the best way implement these tabs so that they would stop loading an old request and load the newest tab selection by the user? Thanks
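
    One hedged sketch of that idea (not from the original code): inside .load()'s callback, "this" refers to #tab_content rather than the clicked link, so capture the clicked element in a local variable first and only move the active class once the content has actually arrived:

        $("#tabs li a").click(function () {
            var $clicked = $(this);                       // remember which tab was clicked
            $("#tab_content").load($clicked.attr("href"), function () {
                $("#tabs li.active").removeClass("active");
                $clicked.parent().addClass("active");     // applied only after the load finishes
            });
            return false;
        });

    To also cancel a stale request when the user clicks again quickly, the .load() would need to be replaced with an explicit $.ajax()/$.get() call whose returned XHR object is kept and .abort()-ed before the next request, since .load() itself does not expose the request.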

    Read the article

  • COMException Problem

    - by Jack Harvin
    Wondering if anyone could help with my problem. Below is the code, and after the code an explination of where the exception is thrown. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Web; using WatiN.Core; using System.Threading; using System.IO; namespace WindowsFormsApplication1 { public partial class Form1 : System.Windows.Forms.Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { } private void button1_Click(object sender, EventArgs e) { Thread t = new Thread(createApplications); Settings.AutoStartDialogWatcher = false; t.SetApartmentState(System.Threading.ApartmentState.STA); t.Start(); } private void createApplications() { createApp("username", "password", "Test App", "This is just a test description", "http:/mysite.com"); } private void createApp(String username, String password, String appName, String description, String appUrl) { var currentBrowser = new IE("http://mysite.com/login/php"); currentBrowser.TextField(Find.ById("username")).TypeText(username); currentBrowser.TextField(Find.ById("password")).TypeText(password); currentBrowser.Button(Find.ById("submit")).Click(); currentBrowser.GoTo("http://mysite.com/createmusicapp.php"); currentBrowser.TextField(Find.ById("application_name")).TypeText(appName); currentBrowser.TextField(Find.ById("application_description")).TypeText(description); currentBrowser.TextField(Find.ById("application_url")).TypeText(appUrl); currentBrowser.RadioButton(Find.ById("client_application_desktop_1")).Click(); currentBrowser.RadioButton(Find.ById("client_application_is_writable_1")).Click(); WatiN.Core.Image captchaImage = currentBrowser.Div(Find.ById("recaptcha_image")).Image(Find.ByStyle("display", "block")); Form2 captcha = new Form2(captchaImage.Src); captcha.ShowDialog(); } } } The exception is thrown on this line: currentBrowser.TextField(Find.ById("username")).TypeText(username); BUT, it's thrown when it gets to this line: captcha.ShowDialog(); It logs in, and fills in the app details and Form2 loads fine, but once loaded, after around 2-3 seconds the exception happens. I am wondering if it's anything to do with the threads? But I wouldn't know how to solve it if it was. The complete exception thrown is: The object invoked has disconnected from its clients. (Exception from HRESULT: 0x80010108 (RPC_E_DISCONNECTED))

    Read the article

  • How can I await the first completed async task of a list in .Net?

    - by Eyal
    My input is a long list of files located on an Amazon S3 server. I'd like to download the metadata of the files, compute the hashes of the local files, and compare the metadata hash with the local files' hash. Currently, I use a loop to start all the metadata downloads asynchronously, then as each completes, compute MD5 on the local file if needed and compare. Here's the code (just the relevant lines): Dim s3client As New AmazonS3Client(KeyId.Text, keySecret.Text) Dim responseTasks As New List(Of System.Tuple(Of ListViewItem, Task(Of GetObjectMetadataResponse))) For Each lvi As ListViewItem In lvStatus.Items Dim gomr As New Amazon.S3.Model.GetObjectMetadataRequest gomr.BucketName = S3FileDialog.GetBucketName(lvi.SubItems(2).Text) gomr.Key = S3FileDialog.GetPrefix(lvi.SubItems(2).Text) responseTasks.Add(New System.Tuple(Of ListViewItem, Task(Of GetObjectMetadataResponse))(lvi, s3client.GetObjectMetadataAsync(gomr))) Next For Each t As System.Tuple(Of ListViewItem, Task(Of GetObjectMetadataResponse)) In responseTasks Dim response As GetObjectMetadataResponse = Await t.Item2 If response.ETag.Trim(""""c) = MD5CalcFile(lvi.SubItems(1).Text) Then lvi.SubItems(3).Text = "Match" UpdateLvi(lvi) End If Next I've got two problems: I'm awaiting the reponses in the order that I made them. I'd rather process them in the order that they complete so that I get them faster. The MD5 calculation is long and synchronous. I tried making it async but the process locked up. I think that the MD5 task was added to the end of .Net's task list and it didn't get to run until all the downloads completed. Ideally, I process the response as they arrive, not in order, and the MD5 is asynchronous but gets a chance to run. Edit: Incorporating WhenAll, it looks like this now: Dim s3client As New Amazon.S3.AmazonS3Client(KeyId.Text, keySecret.Text) Dim responseTasks As New Dictionary(Of Task(Of GetObjectMetadataResponse), ListViewItem) For Each lvi As ListViewItem In lvStatus.Items Dim gomr As New Amazon.S3.Model.GetObjectMetadataRequest gomr.BucketName = S3FileDialog.GetBucketName(lvi.SubItems(2).Text) gomr.Key = S3FileDialog.GetPrefix(lvi.SubItems(2).Text) responseTasks.Add(s3client.GetObjectMetadataAsync(gomr), lvi) Next Dim startTime As DateTimeOffset = DateTimeOffset.Now Do While responseTasks.Count > 0 Dim currentTask As Task(Of GetObjectMetadataResponse) = Await Task.WhenAny(responseTasks.Keys) Dim response As GetObjectMetadataResponse = Await currentTask If response.ETag.Trim(""""c) = MD5CalcFile(lvi.SubItems(1).Text) Then lvi.SubItems(3).Text = "Match" UpdateLvi(lvi) End If Loop MsgBox((DateTimeOffset.Now - startTime).ToString) The UI locks up momentarily whenever MDSCalcFile is done. The whole loop takes about 45s and the first file's MD5 result happens within 1s of starting. If I change the line to: If response.ETag.Trim(""""c) = Await Task.Run(Function () MD5CalcFile(lvi.SubItems(1).Text)) Then The UI doesn't lock up when MD5CalcFile is done. The whole loop takes about 75s, up from 45s, and the first file's MD5 result happens after 40s of waiting.

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there is 8 vector values per measurement, and we may consider this number to be constant for needs of our prototype project. We are using HNibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL. Our Measurement entity class is the one that holds a single measurement, and looks like this: public class Measurement { public virtual Guid Id { get; private set; } public virtual Device Device { get; private set; } public virtual Timestamp Timestamp { get; private set; } public virtual IList<VectorValue> Vectors { get; private set; } } Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, ADO.Net batch size is set to 100 (I think SQLIte does not support batch updates? But it might be useful later). The problem Right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected): 1 timestamp 1 measurement 8 vector values which means that we are actually doing 10x more single table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL server will improve performance, but we would like to know if there might be a way to avoid unnecessary performance costs related to the way database is organized right now. [Edit] With in-memory SQLite I get around 350 items/sec (3500 single table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL server and stop assuming things, right? I will update my post as soon as I test it.
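
    For concreteness, a sketch of the flattened table under discussion (column names are illustrative, not from the project): one row per measurement with the eight vector values as dedicated columns, so each measurement becomes a single INSERT instead of ten:

        CREATE TABLE Measurement (
            Id           UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
            DeviceId     UNIQUEIDENTIFIER NOT NULL,
            TakenAtUtc   DATETIME         NOT NULL,
            V1 FLOAT NOT NULL, V2 FLOAT NOT NULL, V3 FLOAT NOT NULL, V4 FLOAT NOT NULL,
            V5 FLOAT NOT NULL, V6 FLOAT NOT NULL, V7 FLOAT NOT NULL, V8 FLOAT NOT NULL
        );

    On SQL Server it is also worth measuring a bulk path (for example SqlBulkCopy fed from a background queue) before committing to the schema change, since batched inserts are often a bigger win than removing the join.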

    Read the article

  • MS Access MSChart.Graph.8 not printing

    - by Tanj
    Software: Microsoft Access 2007 SP2 Database File Version: Access 2000 I have an access program that I inherited from a previous employee. It uses forms for reports and since I don't have much experience in access I have continued to do this. I have created a copy of the program for another project and modified it to suit. I am having trouble getting more then one chart to print. All the charts display in form view, they all have the same properties (excepting data, position, etc.) For some reason they are not printing. They don't even show up in the print preview. I am thinking it must be something with the graphs themselves as they sometimes lose all information. I have to open the graphs in edit mode and change the data source from column to row and back again so that it gets redrawn. (Refresh doesn't fix it) So right now I don't even have a clue as to where to look so ideas are welcome. Edit #1 It seems to be a problem with linking to an unbound form. Subform Field Linker: Can't build a link between unbound forms. The query for the main form is SELECT tTest.ixTest, tMotorTypes.ixMotorType, tMotorTypes.asMotorType, tMotorTypes.fDeprecated, tTestType.asTest, tTest.asSerialNum, tTest.asOrderNum, tTest.asFrameNum, tTest.asRotorNum, tTest.asOperator, tTest.iStation, tTest.dtTestDate, tTest.ixTestType FROM tMotorTypes INNER JOIN (tTestType INNER JOIN tTest ON tTestType.ixTestType=tTest.ixTestType) ON tMotorTypes.ixMotorType=tTest.ixMotorType; The query for the chart is: SELECT qGraphRSTTemperatures.Frequency, qGraphRSTTemperatures.[Drive End], qGraphRSTTemperatures.[Non Drive End], qGraphRSTTemperatures.[Air In], qGraphRSTTemperatures.Core FROM qGraphRSTTemperatures ORDER BY qGraphRSTTemperatures.ixTemperature; Query qGraphRSTTemperatures: SELECT tElectricalData.dblFrequency AS Frequency, tTemperatures.dblDrvEnd AS [Drive End], tTemperatures.dblNonDrvEnd AS [Non Drive End], tTemperatures.dblAirIn AS [Air In], tTemperatures.dblCore AS Core, tSubTest.ixTest, tTemperatures.ixTemperature FROM (tSubTest INNER JOIN tElectricalData ON tSubTest.ixSubTest = tElectricalData.ixSubTest) LEFT JOIN tTemperatures ON tElectricalData.ixElectrical = tTemperatures.ixElectrical WHERE (((tSubTest.ixSubTestType)=5)) ORDER BY tSubTest.ixTest, tTemperatures.ixTemperature; So how come, in the form view it shows the graph with the correct data when linked thus: Child field: ixTest Master field: ixTest but won't print the graph. The graph will print if I remove the links, but then I have all the data from chart query as it is not limited by ixTest. edit #2 It seems to be a data retrieval/rendering issue in printing. Is there anything in printing that changes the context of records with respect to parent/child relationships?

    Read the article

  • New to ASP.NET. Need help debugging this email form.

    - by Roeland
    Hey guys, First of all, I am a php developer and most of .net is alien to me which is why I am posting here! I just migrated over a site from one set of webhosting to another. The whole site is written in .net. None of the site is database driven so most of it works, except for the contact form. The output on the site simple states there was an error with "There has been an error - please try to submit the contact form again, if you continue to experience problems, please notify our webmaster." This is just a simple message it pops out of it gets to the "catch" part of the email function. I went into web.config and changed the parameters: <emailaddresses> <add name="System" value="[email protected]"/> <add name="Contact" value="[email protected]"/> <add name="Info" value="[email protected]"/> </emailaddresses> <general> <add name="WebSiteDomain" value="hoyespharmacy.com"/> </general> Then the .cs file for contact contains the mail function EmailFormData(): private void EmailFormData() { try { StringBuilder body = new StringBuilder(); body.Append("Name" + ": " + txtName.Text + "\n\r"); body.Append("Phone" + ": " + txtPhone.Text + "\n\r"); body.Append("Email" + ": " + txtEmail.Text + "\n\r"); body.Append("Fax" + ": " + txtEmail.Text + "\n\r"); body.Append("Subject" + ": " + ddlSubject.SelectedValue + "\n\r"); body.Append("Message" + ": " + txtMessage.Text); MailMessage mail = new MailMessage(); mail.IsBodyHtml = false; mail.To.Add(new MailAddress(Settings.GetEmailAddress("System"))); mail.Subject = "Contact Us Form Submission"; mail.From = new MailAddress(Settings.GetEmailAddress("System"), Settings.WebSiteDomain); mail.Body = body.ToString(); SmtpClient smtpcl = new SmtpClient(); smtpcl.Send(mail); } catch { Utilities.RedirectPermanently(Request.Url.AbsolutePath + "?messageSent=false"); } } How do I see what the actual error is. I figure I can do something with the "catch" part of the function.. Any pointers? Thanks!
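
    A small hedged change that usually answers the "what is the actual error" question: catch the exception object and surface or log its details instead of swallowing it, for example:

        catch (Exception ex)
        {
            // Temporarily expose the real SMTP failure; log it or write it to a file
            // before putting the generic redirect back for production.
            System.Diagnostics.Trace.TraceError(ex.ToString());
            Utilities.RedirectPermanently(Request.Url.AbsolutePath +
                "?messageSent=false&reason=" + Server.UrlEncode(ex.Message));
        }

    Since the site was just moved to a new host, the most likely culprit is the SMTP configuration (the <system.net><mailSettings><smtp> section of web.config, or whatever SmtpClient is picking up by default), and the exception text will usually say so directly.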

    Read the article

  • Text Obfuscation using base64_encode()

    - by user271619
    I'm playing around with encrypt/decrypt coding in php. Interesting stuff! However, I'm coming across some issues involving what text gets encrypted into. Here's 2 functions that encrypt and decrypt a string. It uses an Encryption Key, which I set as something obscure. I actually got this from a php book. I modified it slightly, but not to change it's main goal. I created a small example below that anyone can test. But, I notice that some characters show up as the "encrypted" string. Characters like "=" and "+". Sometimes I pass this encrypted string via the url. Which may not quite make it to my receiving scripts. I'm guessing the browser does something to the string if certain characters are seen. I'm really only guessing. is there another function I can use to ensure the browser doesn't touch the string? or does anyone know enough php bas64_encode() to disallow certain characters from being used? I'm really not going to expect the latter as a possibility. But, I'm sure there's a work-around. enjoy the code, whomever needs it! define('ENCRYPTION_KEY', "sjjx6a"); function encrypt($string) { $result = ''; for($i=0; $i<strlen($string); $i++) { $char = substr($string, $i, 1); $keychar = substr(ENCRYPTION_KEY, ($i % strlen(ENCRYPTION_KEY))-1, 1); $char = chr(ord($char)+ord($keychar)); $result.=$char; } return base64_encode($result)."/".rand(); } function decrypt($string){ $exploded = explode("/",$string); $string = $exploded[0]; $result = ''; $string = base64_decode($string); for($i=0; $i<strlen($string); $i++) { $char = substr($string, $i, 1); $keychar = substr(ENCRYPTION_KEY, ($i % strlen(ENCRYPTION_KEY))-1, 1); $char = chr(ord($char)-ord($keychar)); $result.=$char; } return $result; } echo $encrypted = encrypt("reaplussign.jpg"); echo "<br>"; echo decrypt($encrypted);
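
    The characters base64 can emit ('+', '/' and '=') are all significant in URLs - '+' in a query string, for instance, decodes back as a space - which would corrupt the token before decrypt() sees it. Two hedged options, neither of which changes the cipher itself:

        <?php
        // 1) Simplest: escape the value when building the link; PHP decodes $_GET automatically.
        $param = rawurlencode($encrypted);

        // 2) Or use a URL-safe base64 alphabet inside encrypt()/decrypt():
        function base64url_encode($data) {
            return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');
        }
        function base64url_decode($data) {
            return base64_decode(strtr($data, '-_', '+/'));
        }

    With option 2, encrypt() would call base64url_encode() instead of base64_encode(), and decrypt() the matching decode; base64_decode() tolerates the stripped '=' padding in non-strict mode.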

    Read the article

  • Unrequired property keeps getting data-val-required attribute

    - by frennky
    This is the model with it's validation: [MetadataType(typeof(TagValidation))] public partial class Tag { } public class TagValidation { [Editable(false)] [HiddenInput(DisplayValue = false)] public int TagId { get; set; } [Required] [StringLength(20)] [DataType(DataType.Text)] public string Name { get; set; } //... } Here is the view: <h2>Create</h2> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> @using (Html.BeginForm()) { @Html.ValidationSummary(true) <fieldset> <legend>Tag</legend> <div>@Html.EditorForModel()</div> <p> <input type="submit" value="Create" /> </p> </fieldset> } <div> @Html.ActionLink("Back to List", "Index") </div> And here is what get's renderd: <form action="/Tag/Create" method="post"> <fieldset> <legend>Tag</legend> <div><input data-val="true" data-val-number="The field TagId must be a number." data-val-required="The TagId field is required." id="TagId" name="TagId" type="hidden" value="" /> <div class="editor-label"><label for="Name">Name</label></div> <div class="editor-field"><input class="text-box single-line" data-val="true" data-val-length="The field Name must be a string with a maximum length of 20." data-val-length-max="20" data-val-required="The Name field is required." id="Name" name="Name" type="text" value="" /> <span class="field-validation-valid" data-valmsg-for="Name" data-valmsg-replace="true"></span></div> ... </fieldset> </form> The problem is that TagId validation gets generated althoug thare is no Required attribute set on TagId property. Because of that I can't even pass the client-side validation in order to create new Tag in db. What am I missing?
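
    The implicit rule comes from the fact that TagId is a non-nullable value type: MVC's DataAnnotations provider emits a [Required] validator for every value-type property, regardless of what the metadata class says. Two hedged ways around it:

        // Option 1: make the key nullable so no implicit Required is generated.
        public int? TagId { get; set; }

        // Option 2: switch the implicit rule off globally, e.g. in Application_Start.
        protected void Application_Start()
        {
            DataAnnotationsModelValidatorProvider
                .AddImplicitRequiredAttributeForValueTypes = false;
            // ... existing registration code ...
        }

    Option 1 is usually the lighter touch for an identity column that is simply absent on a Create form; option 2 changes the behaviour for every value-type property in the application.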

    Read the article

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access operation in compiled Java code"

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the called by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine. On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the Map.getInt, getShort methods, i.e. a read operation in the map. The uncontroversial (?) code that sets up the map is this: /** Set up the map from the given filename and position */ protected void open() throws IOException { // Set up buffer, is this all the flexibility we'll need? channel = new FileInputStream(file).getChannel(); MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size()); map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes? map = map1; // assumes the host writing the files is little-endian (x86), ought to be configurable map.order(java.nio.ByteOrder.LITTLE_ENDIAN); map.position(position); } and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map. I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running java 1.6.0_20-b02, though the production host is running Debian/lenny, the dev host is Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM, and are running with the same java heap settings. I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515 which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that was the longest it had gone before crashing! If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)
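
    One middle ground between mmap and "rewrite it without nio" (offered as a sketch, not a diagnosis): keep the FileChannel but use explicit positional reads into a heap ByteBuffer. An NFS I/O error then surfaces as an IOException at the read call rather than as a fault inside a compiled getInt(), which is the failure mode memory mapping tends to produce:

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.channels.FileChannel;

        final class BlockReader {
            /** Read length bytes starting at position into a little-endian heap buffer. */
            static ByteBuffer readBlock(FileChannel channel, long position, int length)
                    throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(length);
                buf.order(ByteOrder.LITTLE_ENDIAN);
                while (buf.hasRemaining()) {
                    int n = channel.read(buf, position + buf.position());
                    if (n < 0) {
                        throw new IOException("unexpected EOF at " + (position + buf.position()));
                    }
                }
                buf.flip();
                return buf;
            }
        }

    Reading each fixed-size record (or a few megabytes at a time) this way keeps most of the sequential-read performance while taking the memory mapping out of the NFS picture.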

    Read the article

  • PHP sessions and class members.

    - by JDW
    Ok, messing about with classes in PHP and can't get it to work the way I'm used to as a C++/Java-guy. In the "_init" funtion, if I run a query at the "// query works here" line", everythong works, but in the "getUserID" function, all that happens is said warning... "getUserID" gets called from login.php (they are in the same dir): login.php <?php include_once 'sitehandler.php'; include_once 'dbhandler.php'; session_start(); #TODO: Safer input handling $t_userName = $_POST["name"]; $t_userId = $_SESSION['handler']['db']->getUserID($t_userName); if ($t_userId != -1) { $_SESSION['user']['name'] = $t_userName; $_SESSION['user']['id'] = $t_userId; } //error_log("user: " . $_SESSION['user']['name'] . ", id: ". $_SESSION['user']['id']); header("Location: " . $_SERVER["HTTP_REFERER"]); ? dbhandler.php <?php include_once 'handler.php'; class DBHandler extends HandlerAbstract { private $m_handle; function __construct() { parent::__construct(); } public function test() { #TODO: isdir liquibase #TODO: isfile liquibase-195/liquibase + .bat + execrights $this->m_isTested = true; } public function _init() { if (!$this->isTested()) $this->test(); if (!file_exists('files/data.db')) { #TODO: How to to if host is Windows based? exec('./files/liquibase-1.9.5/liquibase --driver=org.sqlite.JDBC --changeLogFile=files/data_db.xml --url=jdbc:sqlite:files/data.db update'); #TODO: quit if not success } #TODO: Set with default data try { $this->m_handle = new SQLite3('files/data.db'); } catch (Exception $e) { die("<hr />" . $e->getMessage() . "<hr />"); } // query works here $this->m_isSetup = true; } public function teardown() { } public function getUserID($name) { // PHP Warning: SQLite3::prepare(): The SQLite3 object has not been correctly initialised in $t_statement = $this->m_handle->prepare("SELECT id FROM users WHERE name = :name"); $t_statement->bindValue(":name", $name, SQLITE3_TEXT); $t_result = $t_statement->execute(); //var_dump($this->m_handle); return ($t_result)? (int)$t_result['id']: -1; } }
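
    One hedged reading of the symptom: login.php pulls the handler out of $_SESSION, and a live SQLite3 connection does not survive being serialised into the session, so by the time getUserID() runs the object's internal handle is gone - which is exactly what "has not been correctly initialised" describes. A sketch of reconnecting lazily instead of relying on the handle opened in _init() (it also folds in the fetch that the original return line is missing, since execute() returns an SQLite3Result, not a row):

        <?php
        class DBHandler extends HandlerAbstract {
            private $m_handle;

            public function __wakeup() {
                $this->m_handle = null;          // the old connection did not survive the session
            }

            private function handle() {
                if (!($this->m_handle instanceof SQLite3)) {
                    $this->m_handle = new SQLite3('files/data.db');
                }
                return $this->m_handle;
            }

            public function getUserID($name) {
                $t_statement = $this->handle()->prepare("SELECT id FROM users WHERE name = :name");
                $t_statement->bindValue(":name", $name, SQLITE3_TEXT);
                $t_result = $t_statement->execute();
                $row = $t_result->fetchArray(SQLITE3_ASSOC);
                return $row ? (int)$row['id'] : -1;
            }
        }

    If storing the handler in the session turns out to be more trouble than it is worth, constructing a fresh DBHandler per request and keeping only plain data (user name, id) in $_SESSION avoids the problem entirely.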

    Read the article

  • do I need to close an audio Clip?

    - by Michael
    have an application that processes real-time data and is supposed to beep when a certain event occurs. The triggering event can occur multiple times per second, and if the beep is already playing when another event triggers the code is just supposed to ignore it (as opposed to interrupting the current beep and starting a new one). Here is the basic code: Clip clickClip public void prepareProcess() { super.prepareProcess(); clickClip = null; try { clipFile = new File("C:/WINDOWS/Media/CHIMES.wav"); ais = AudioSystem.getAudioInputStream(clipFile); clickClip = AudioSystem.getClip(); clickClip.open(ais); fileIsLoaded = true; } catch (Exception ex) { clickClip = null; fileIsLoaded = false; } } public void playSound() { if (fileIsLoaded) { if ((clickClip==null) || (!clickClip.isRunning())) { try { clickClip.setFramePosition(0); clickClip.start(); } catch (Exception ex) { System.out.println("Cannot play click noise"); ex.printStackTrace(); } } } The prepareProcess method gets run once in the beginning, and the playSound method is called every time a triggering event occurs. My question is: do I need to close the clickClip object? I know I could add an actionListener to monitor for a Stop event, but since the event occurs so frequently I'm worried the extra processing is going to slow down the real-time data collection. The code seems to run fine, but my worry is memory leaks. The code above is based on an example I found while searching the net, but the example used an actionListener to close the Clip specifically "to eliminate memory leaks that would occur when the stop method wasn't implemented". My program is intended to run for hours so any memory leaks I have will cause problems. I'll be honest: I have no idea how to verify whether or not I've got a problem. I'm using Netbeans, and running the memory profiler just gave me a huge list of things that I don't know how to read. This is supposed to be the simple part of the program, and I'm spending hours on it. Any help would be greatly appreciated! Michael

    Read the article

  • Multiprogramming in Django, writing to the Database

    - by Marcus Whybrow
    Introduction I have the following code which checks to see if a similar model exists in the database, and if it does not it creates the new model: class BookProfile(): # ... def save(self, *args, **kwargs): uniqueConstraint = {'book_instance': self.book_instance, 'collection': self.collection} # Test for other objects with identical values profiles = BookProfile.objects.filter(Q(**uniqueConstraint) & ~Q(pk=self.pk)) # If none are found create the object, else fail. if len(profiles) == 0: super(BookProfile, self).save(*args, **kwargs) else: raise ValidationError('A Book Profile for that book instance in that collection already exists') I first build my constraints, then search for a model with those values which I am enforcing must be unique Q(**uniqueConstraint). In addition I ensure that if the save method is updating and not inserting, that we do not find this object when looking for other similar objects ~Q(pk=self.pk). I should mention that I ham implementing soft delete (with a modified objects manager which only shows non-deleted objects) which is why I must check for myself rather then relying on unique_together errors. Problem Right thats the introduction out of the way. My problem is that when multiple identical objects are saved in quick (or as near as simultaneous) succession, sometimes both get added even though the first being added should prevent the second. I have tested the code in the shell and it succeeds every time I run it. Thus my assumption is if say we have two objects being added Object A and Object B. Object A runs its check upon save() being called. Then the process saving Object B gets some time on the processor. Object B runs that same test, but Object A has not yet been added so Object B is added to the database. Then Object A regains control of the processor, and has allready run its test, even though identical Object B is in the database, it adds it regardless. My Thoughts The reason I fear multiprogramming could be involved is that each Object A and Object is being added through an API save view, so a request to the view is made for each save, thus not a single request with multiple sequential saves on objects. It might be the case that Apache is creating a process for each request, and thus causing the problems I think I am seeing. As you would expect, the problem only occurs sometimes, which is characteristic of multiprogramming or multiprocessing errors. If this is the case, is there a way to make the test and set parts of the save() method a critical section, so that a process switch cannot happen between the test and the set?
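
    Making the test-and-set a critical section is possible, but it has to be enforced at the database rather than in Python, because each Apache worker is a separate process. A hedged sketch using row locks (this assumes a reasonably recent Django; older releases spell the transaction and locking APIs differently):

        from django.core.exceptions import ValidationError
        from django.db import models, transaction

        class BookProfile(models.Model):
            # ... fields as before ...

            def save(self, *args, **kwargs):
                with transaction.atomic():
                    clash = (BookProfile.objects
                             .select_for_update()
                             .filter(book_instance=self.book_instance,
                                     collection=self.collection)
                             .exclude(pk=self.pk))
                    if clash.exists():
                        raise ValidationError('A Book Profile for that book instance '
                                              'in that collection already exists')
                    super(BookProfile, self).save(*args, **kwargs)

    One caveat: select_for_update() only locks rows that already exist, so two simultaneous first inserts can still both pass the check. The watertight version is a database-level unique constraint that incorporates the soft-delete marker (for example a deleted timestamp that defaults to a fixed sentinel value for live rows), combined with catching IntegrityError on save.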

    Read the article

  • How to keep multiple connectionString passwords safe, separate, and easy to deploy?

    - by Funka
    I know there are plenty of questions here already about this topic (I've read through as many as I could find), but I haven't yet been able to figure out how best to satisfy my particular criteria. Here are the goals: The ASP.NET application will run on a few different web servers, including localhost workstations for development. This means encrypting web.config using a machine key is out. The application will decide which connection string to use based on the server name (using a switch statement). For example, "localhost" and "dev.example.com" will use the DevDatabaseConnectionString, "test.example.com" will use the TestDatabaseConnectionString, and "www.example.com" will use the ProdDatabaseConnectionString, for example. Ideally, the exact same executables and web.config should be able to run on any of these environments, without needing to tailor or configure each environment separately every time that we deploy (something that seems like it would be easy to forget/mess up one day during a deployment, which is why we moved away from having just one connectionstring that has to be changed on each target). Deployment is currently accomplished via FTP. We will not have command-line access to the production web server. This means using aspnet_regiis.exe is out. (I could run on localhost, however, if this would still work.) We would prefer to not have to recompile the application whenever a password changes, so using web.config (or db.config or whatever) seems to make the most sense. A developer should not be able to decrypt the production database password. If a developer checks the source code out onto their localhost laptop (which would determine that it should be using the DevDatabaseConnectionString, remember?) and the laptop gets lost or stolen, it should not be possible to get at the other connection strings. Thus, having a single RSA private key to un-encrypt all three passwords cannot be considered. (Contrary to #3 above, it does seem like we'd need to have three separate key files if we went this route; these could be installed once per machine, and should the wrong key file get deployed to the wrong server, the worst that should happen is that the app can't decrypt anything---and not allow the wrong host to access the wrong database!) I know this is probably a subjective question (asking for a "best" way to do something), but given the criteria I've mentioned, I'm hoping that a single best answer will indeed arise. Thank you!

    Read the article

  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use-case is basically this: Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries). The analyst analyzes the data created in (2), gets close to her goal, but sees that needs more data and so goes back to (1). Rinse repeat until the tables and graphics meet QA/QC and satisfy the client. Write report incorporating tables and graphics. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data by a new download (e.g. get the building permits from the last year), and pressing a "RECALCULATE" button, unless specifications change. At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ARCGIS, R, and Unix tools. Thanks! PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (w/ ".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files / targets that depend on it, and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for putting into SQL database, and a step for a templating language like sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback! http://www.gnu.org/software/make/manual/html%5Fnode/index.html#Top R=/home/wsprague/R-2.9.2/bin/R persondata.RData : ImportData.R ../../DATA/ss07por.csv Functions.R $R --slave -f ImportData.R persondata.Munged.RData : MungeData.R persondata.RData Functions.R $R --slave -f MungeData.R report.txt: TabulateAndGraph.R persondata.Munged.RData Functions.R $R --slave -f TabulateAndGraph.R report.txt

    Read the article

  • Is there a way to split a widescreen monitor in to two or more virtual monitors?

    - by Mike Thompson
    Like most developers I have grown to love dual monitors. I won't go into all the reasons for their goodness; just take it as a given. However, they are not perfect. You can never seem to line them up "just right". You always end up with the monitors at slight funny angles. And of course the bezel always gets in the way. And this is with identical monitors. The problem is much worse with different monitors -- VMWare's multi monitor feature won't even work with monitors of differnt resolutions. When you use multiple monnitors, one of them becomes your primary monitor of focus. Your focus may flip from one monitor to the other, but at any point in time you are usually focusing on only one monitor. There are exceptions to this (WinDiff, Excel), but this is generally the case. I suggest that having a single large monitor with all the benefits of multiple smaller monitors would be a better solution. Wide screen monitors are fantastic, but it is hard to use all the space efficiently. If you are writing code you are generally working on the left-hand side of the window. If you maximize an editor on a wide-screen monitor the right-hand side of the window will be a sea of white. Programs like WinSplit Revolution will help to organise your windows, but this is really just addressing the symptom, not the problem. Even with WinSplit Revolution, when you maximise a window it will take up the whole screen. You can't lock a window into a specific section of the screen. This is where virtual monitors comes in. What would be really nice is a video driver that sits on top of the existing driver, but allows a single monitor to be virtualised into multiple monitors. Control Panel would see your single physical monitor as two or more virtual monitors. The software could even support a virtual bezel to emphasise what is happening, or you could opt for seamless mode. Programs like WinSplit Revolution and UltraMon would still work. This virtual video driver would allow you to slice & dice your physical monitor into as many virtual monitors as you want. Does anybody know if such software exists? If not, are there any budding Windows display driver guru's out there willing to take up the challenge? I am not after the myriad of virtual desktop/window manager programs that are available. I get frustrated with these programs. They seem good at first but they usually have some strange behaviour and don't work well with other programs (such as WinSplit Revolution). I want the real thing!

    Read the article

  • JAX-WS wsgen and collections of collections: wsgen broken?

    - by ayang
    I've been playing around with "bottom-up" JAX-WS and have come across something odd when running wsgen. If I have a service class that does something like: @WebService public class Foo { public ArrayList<Bar> getBarList(String baz) { ... } } then running wsgen gets me a FooService_schema1.xsd that has something like this: <xs:complexType name="getBarListResponse"> <xs:sequence> <xs:element name="return" type="tns:bar" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> which seems reasonable. However, if I want a collection of collections like: public BarCollection getBarCollection(String baz) { ... } // BarCollection is just a container for an ArrayList<Bar> then the generated schema ends up with stuff like: <xs:complexType name="barCollection"> <xs:sequence/> </xs:complexType> <xs:complexType name="getBookCollectionsResponse"> <xs:sequence> <xs:element name="return" type="tns:barCollection" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> An empty sequence is not what I had in mind at all. My original approach was to go with: public ArrayList<ArrayList<Bar>> getBarLists(String baz) { ... } but that ends up with a big chain of complexTypes that just wind up with an empty sequence at the end as well: <xs:complexType name="getBarListsResponse"> <xs:sequence> <xs:element name="return" type="tns:arrayList" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <xs:complexType name="arrayList"> <xs:complexContent> <xs:extension base="tns:abstractList"> <xs:sequence/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="abstractList" abstract="true"> <xs:complexContent> <xs:extension base="tns:abstractCollection"> <xs:sequence/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="abstractCollection" abstract="true"> <xs:sequence/> </xs:complexType> Am I missing something or is this a known hole in wsgen? JAXB? Andy
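
    The empty <xs:sequence/> is what JAXB produces when it finds nothing to map on BarCollection - a class that merely wraps (or extends) an ArrayList exposes no bean properties. A hedged sketch of a wrapper that wsgen can describe (names are illustrative, not from the original code):

        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.bind.annotation.XmlAccessType;
        import javax.xml.bind.annotation.XmlAccessorType;
        import javax.xml.bind.annotation.XmlElement;

        @XmlAccessorType(XmlAccessType.FIELD)
        public class BarCollection {
            @XmlElement(name = "bar")
            private List<Bar> bars = new ArrayList<Bar>();

            public List<Bar> getBars() {
                return bars;
            }
        }

    With the list exposed as an annotated field, the generated barCollection complex type should contain a repeated bar element instead of an empty sequence, and "collection of collections" becomes "list of wrappers", which JAXB copes with.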

    Read the article

  • JQuery validation not working for checkbox group

    - by Chris Halcrow
    I'm having trouble getting JQuery validation to work with a set of checkboxes. I'm generating the checkboxes using an ASP.NET checkboxlist, and I've used JQuery to set the 'name' attribute to the same thing for each checkbox in the list. Here's the code that gets written to the browser. I'm setting the 'validate' attribute on the 1st checkbox to set the rule that at least one checkbox must be selected. The JQuery validation works for all other elements on the form, but not for the checkbox list. I'm also using a JQuery form wizard on the page which triggers validation for each 'page' of the form, so I don't have control over how the validation is called. <input id="ContentPlaceHolder1_MainContent_AreaOfInterest_0" class="ui-wizard-content ui-helper-reset ui-state-default" type="checkbox" value="Famine" name="hello[]" validate="required:true, minlength:1"> <label for="ContentPlaceHolder1_MainContent_AreaOfInterest_0">Famine</label> <br> <input id="ContentPlaceHolder1_MainContent_AreaOfInterest_1" class="ui-wizard-content ui-helper-reset ui-state-default" type="checkbox" value="Events Volunteer" name="hello[]"> <label for="ContentPlaceHolder1_MainContent_AreaOfInterest_1">Events Volunteer</label> Any ideas on what's going wrong? There are lots of examples of JQuery scripts that will do the validation, however I'm trying to avoid this as I'm generating the checkboxlist server side by a custom control so that it can be re-used across different pages that may or may not have JQuery enabled. I'm trying to enable the JQuery validation whilst being as unobtrusive as possible, so that pages will still work even if JQuery is disabled. Here are the relevant JQuery inclusions and JQuery initialisation script for the form wizard. I'm not using any initialisation code for JQuery validation: <script type="text/javascript" src="../js/formwizard/js/bbq.js"></script> <script type="text/javascript" src="../js/formwizard/js/jquery.form.js"></script> <script type="text/javascript" src="../js/formwizard/js/jquery.form.wizard.js"></script> <script type="text/javascript" src="../js/formwizard/js/jquery.validate.js"></script> <script type="text/javascript"> $(document).ready(function () { $("#form1").formwizard({ validationEnabled: true, focusFirstInput: true }); }); </script>
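
    A hedged workaround that keeps the markup untouched: the validate="required:true, minlength:1" attribute is normally only honoured when the metadata plugin is present and configured to read it, so instead the rule can be attached from script after the wizard has initialised validation. For checkbox groups the plugin applies rules per name, and minlength counts how many boxes in the group are ticked:

        $(document).ready(function () {
            $("#form1").formwizard({ validationEnabled: true, focusFirstInput: true });

            // Hypothetical: the group shares the name "hello[]" as set by the existing jQuery.
            $("input[name='hello[]']").first().rules("add", {
                required: true,
                minlength: 1,
                messages: { required: "Please tick at least one option" }
            });
        });

    This keeps the server control unchanged, and if JavaScript (or jQuery) is unavailable the form still posts and can be validated on the server as before.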

    Read the article
