Search Results


  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project and I would like to get some input on possible ways to display database data and general best practice.

    In short, the body of my webpage will show data from my database in a table-like format, with each table row showing similar data. For example:

        Name        Age   Position   Date Joined
        Jon Smith   23    Striker    18th Mar 2005
        John Doe    38    Defender   3rd Jan 1988

    In terms of functionality, primarily I'd like to give the user the ability to edit the data and, after the edit, commit the edit to the database and refresh the view. The reason I want to refresh the view is because the data is date ordered and I will need to re-sort if the user edits a date field.

    My main question is: what architecture / tools would be best suited to fulfil these requirements at a high level? From the research I have done so far, my initial conclusions were:

    - ADO.NET for data retrieval. This is something I have used before and feel comfortable with. I like the look of LINQ to SQL but don't want to make the learning curve any steeper for my first outing into MVC land just yet.
    - Partial Views to create a template and then iterate through a datatable that I have pulled back from my database model.
    - jQuery to allow the user to edit data in the table, error check edited data entries, etc.

    Also, my initial view was that caching the data would not be a key requirement here. The only field a user will be able to update is the date field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date sorted). Any thoughts on this?

    Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thoughts are that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, ability to sort by column using a sort glyph in column headers, etc.) and I don't really see any benefit to this over and above the solution I have outlined above. Again, is there reason to reconsider this view?

    Finally, when a user edits a date, I will need to refresh the view. In order to do this I had been reading about Html.RenderAction, and this seemed like it may be a better option than using Partial Views as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction or have I misunderstood its usage?

    Hope this post is clear and not too long. I did consider separate posts for each topic (e.g. Partial View vs. Html.RenderAction, when to use a jQuery datagrid plug-in) but it feels like these issues are so intertwined that they need to be dealt with in the context of each other. Thanks
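
    For what it's worth, a minimal sketch of the edit/refresh round trip described above, with hypothetical controller, repository and view names (the repository is assumed to wrap the ADO.NET code):

        // PlayersController.cs -- illustrative sketch only, not a drop-in implementation
        public class PlayersController : Controller
        {
            // Assumed ADO.NET-backed data access class
            private readonly PlayerRepository _repository = new PlayerRepository();

            // Full page; the table itself is rendered by the "_PlayerTable" partial view.
            public ActionResult Index()
            {
                return View(_repository.GetAllOrderedByJoinDate());
            }

            // Posted by jQuery when a row is edited: commit the change, then return
            // the re-sorted table so the client can swap it in without a full reload.
            [HttpPost]
            public ActionResult UpdateJoinDate(int id, DateTime dateJoined)
            {
                _repository.UpdateJoinDate(id, dateJoined);
                return PartialView("_PlayerTable", _repository.GetAllOrderedByJoinDate());
            }
        }

    On the client, a jQuery $.post to /Players/UpdateJoinDate can replace the table's container with the returned markup; Html.RenderAction would do much the same server-side, so the choice is mostly about where the refresh logic should live.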


  • user generated / user specific functions

    - by pedalpete
    I'm looking for the most elegant and secure method to do the following.

    I have a calendar, and groups of users. Users can add events to specific days on the calendar, and specify how long each event lasts for. I've had a few requests from users to add the ability for them to define that events of a specific length include a break of a certain amount of time, or require that a specific amount of time be left between events. For example:

        if event is 2 hours, include a 20 min break.
        for each event, require 30 minutes before the start of the next event.

    The same group that has asked for an event of 2 hours to include a 20 min break could also require that an event of 3 hours include a 30 minute break. In the end, what the users are trying to get is an elapsed time excluding breaks calculated for them. Currently I provide them a total elapsed time, but they are looking for a running time.

    However, each of these requests is different for each group. Where one group may want a 30 minute break during a 2 hour event, another may want only 10 minutes for each 3 hour event.

    I was kinda thinking I could write the functions into a php file per group, and then include that file, do the calculations via php and then return a calculated total to the user, but something about that doesn't sit right with me. Another option is to output the group's functions to javascript and have it run client-side, as I'm already returning the duration of the event, but where the user is part of more than one group with different rules, this seems like it could get rather messy.

    I currently store the start and end time in the database, but no 'durations', and I don't think I should be storing the calculated totals in the db, because if a group decides to change their calculations, I'd need to change it throughout the db.

    Is there a better way of doing this? I would just store the variables in mysql, but I don't see how I can then say to mysql to calculate based on those variables. I'm REALLY lost here. Any suggestions? I'm hoping somebody has done something similar and can provide some insight into the best direction.

    If it helps, my table contains eventid, user, group, startDate, startTime, endDate, endTime, type. The json for the event which I return to the user is

        {"eventid":"'.$eventId.'", "user":"'.$userId.'","group":"'.$groupId.'","type":"'.$type.'","startDate":".$startDate.'","startTime":"'.$startTime.'","endDate":"'.$endDate.'","endTime":"'.$endTime.'","durationLength":"'.$duration.'", "durationHrs":"'.$durationHrs.'"}

    where, for example, duration length is 2.5 and duration hours is 2:30.
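
    One direction worth sketching (table and column names below are hypothetical): keep each group's break rules as rows in a small table rather than as per-group code, and apply them in one PHP function when the event JSON is built, so a rule change is an UPDATE instead of an edit to an included file.

        <?php
        // Sketch only: assumes a group_break_rules table with columns
        // (group_id, min_event_minutes, break_minutes) and a PDO connection.
        function runningMinutes(PDO $db, $groupId, $startTs, $endTs) {
            $elapsed = ($endTs - $startTs) / 60;          // total elapsed minutes
            $stmt = $db->prepare(
                'SELECT break_minutes FROM group_break_rules
                 WHERE group_id = ? AND min_event_minutes <= ?
                 ORDER BY min_event_minutes DESC LIMIT 1');
            $stmt->execute(array($groupId, $elapsed));
            $break = (int) $stmt->fetchColumn();          // 0 when no rule matches
            return $elapsed - $break;                     // running time excluding the break
        }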


  • Does oneway declaration in Android .aidl guarantee that method will be called in a separate thread?

    - by Dan Menes
    I am designing a framework for a client/server application for Android phones. I am fairly new to both Java and Android (but not new to programming in general, or threaded programming in particular). Sometimes my server and client will be in the same process, and sometimes they will be in different processes, depending on the exact use case. The client and server interfaces look something like the following:

    IServer.aidl:

        package com.my.application;
        interface IServer {
            /**
             * Register client callback object
             */
            void registerCallback( in IClient callbackObject );
            /**
             * Do something and report back
             */
            void doSomething( in String what );
            . . .
        }

    IClient.aidl:

        package com.my.application;
        oneway interface IClient {
            /**
             * Receive an answer
             */
            void reportBack( in String answer );
            . . .
        }

    Now here is where it gets interesting. I can foresee use cases where the client calls IServer.doSomething(), which in turn calls IClient.reportBack(), and on the basis of what is reported back, IClient.reportBack() needs to issue another call to IServer.doSomething().

    The issue here is that IServer.doSomething() will not, in general, be reentrant. That's OK, as long as IClient.reportBack() is always invoked in a new thread. In that case, I can make sure that the implementation of IServer.doSomething() is always synchronized appropriately so that the call from the new thread blocks until the first call returns.

    If everything works the way I think it does, then by declaring the IClient interface as oneway, I guarantee this to be the case. At least, I can't think of any way that the call from IServer.doSomething() to IClient.reportBack() can return immediately (what oneway is supposed to ensure), yet IClient.reportBack() still be able to reinvoke IServer.doSomething() recursively in the same thread. Either a new thread in IServer must be started, or else the old IServer thread can be re-used for the inner call to IServer.doSomething(), but only after the outer call to IServer.doSomething() has returned.

    So my question is, does everything work the way I think it does? The Android documentation hardly mentions oneway interfaces.


  • Parse and transform XML with missing elements into table structure

    - by dnlbrky
    I'm trying to parse an XML file. A simplified version of it looks like this:

        x <- '<grandparent><parent><child1>ABC123</child1><child2>1381956044</child2></parent><parent><child2>1397527137</child2></parent><parent><child3>4675</child3></parent><parent><child1>DEF456</child1><child3>3735</child3></parent><parent><child1/><child3>3735</child3></parent></grandparent>'

        library(XML)
        xmlRoot(xmlTreeParse(x))
        ## <grandparent>
        ##  <parent>
        ##   <child1>ABC123</child1>
        ##   <child2>1381956044</child2>
        ##  </parent>
        ##  <parent>
        ##   <child2>1397527137</child2>
        ##  </parent>
        ##  <parent>
        ##   <child3>4675</child3>
        ##  </parent>
        ##  <parent>
        ##   <child1>DEF456</child1>
        ##   <child3>3735</child3>
        ##  </parent>
        ##  <parent>
        ##   <child1/>
        ##   <child3>3735</child3>
        ##  </parent>
        ## </grandparent>

    I'd like to transform the XML into a data.frame / data.table that looks like this:

        parent <- data.frame(child1=c("ABC123",NA,NA,"DEF456",NA),
                             child2=c(1381956044, 1397527137, rep(NA, 3)),
                             child3=c(rep(NA, 2), 4675, 3735, 3735))
        parent
        ##   child1     child2 child3
        ## 1 ABC123 1381956044     NA
        ## 2   <NA> 1397527137     NA
        ## 3   <NA>         NA   4675
        ## 4 DEF456         NA   3735
        ## 5   <NA>         NA   3735

    If each parent node always contained all of the possible elements ("child1", "child2", "child3", etc.), I could use xmlToList and unlist to flatten it, and then dcast to put it into a table. But the XML often has missing child elements. Here is an attempt with incorrect output:

        library(data.table)

        ## Flatten:
        dt <- as.data.table(unlist(xmlToList(x)), keep.rownames=T)
        setnames(dt, c("column", "value"))

        ## Add row numbers, but they're incorrect due to missing XML elements:
        dt[, row:=.SD[,.I], by=column][]
        ##           column      value row
        ## 1: parent.child1     ABC123   1
        ## 2: parent.child2 1381956044   1
        ## 3: parent.child2 1397527137   2
        ## 4: parent.child3       4675   1
        ## 5: parent.child1     DEF456   2
        ## 6: parent.child3       3735   2
        ## 7: parent.child3       3735   3

        ## Reshape from long to wide, but some values end up in the wrong row:
        dcast.data.table(dt, row~column, value.var="value", fill=NA)
        ##    row parent.child1 parent.child2 parent.child3
        ## 1:   1        ABC123    1381956044          4675
        ## 2:   2        DEF456    1397527137          3735
        ## 3:   3            NA            NA          3735

    I won't know ahead of time the names of the child elements, or the count of unique element names for children of the grandparent, so the answer should be flexible.
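
    An untested sketch of one flexible direction (building one row per <parent> and letting rbindlist fill in whichever children are absent), since it doesn't require knowing the child names in advance:

        library(XML)
        library(data.table)

        doc  <- xmlRoot(xmlTreeParse(x, useInternalNodes = TRUE))
        rows <- lapply(xmlChildren(doc), function(p) as.list(xmlSApply(p, xmlValue)))
        parent <- rbindlist(rows, fill = TRUE)

    Everything comes back as character (and an empty <child1/> becomes "" rather than NA), so a type-conversion pass over the columns would still be needed afterwards.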


  • Personal Project - Next practical language/tech to learn

    - by Paul Nathan
    I'm working on a personal project doing some finance analysis. It's a totally new field for me, and I'm really having fun with it so far, plus working in the high-level language arena is a great break from my embedded systems daytime work.

    I have a MySQL backend on a non-local server with a pile of stock data. My task now is to do some analysis of the stocks and produce something approximating a useful result. There are a couple of technical difficulties:

    (1) I have a lot of records. To be precise, I believe I'm near 100K records right now, and this number grows by 6.1K each weekday. I need to create a way to rummage through these fields and do data analysis - based on a given computation, go look at this other set. Fine and dandy, nothing too outre. But this means I could really use a straightforward API for talking to MySQL.

    (2) Ideally, it runs on OS X 10.4.11. No Windows/Linux machine at home.

    (3) I can use PHP, C++, Perl, etc. I even have an R installation. I'm pretty flexible with stuff, so long as it runs on OS X. (Lots of options here: pick water, H2O, or dihydrogen monoxide ;-) )

    (4) Lack of hassle. While I like clever and fun ways of doing things, I'm trying to get some analysis done, not spend ten hours doing installation work and scratching my head figuring out a theoretical syntax question needed to spout out "hello world".

    What's the question? I'd like to dig into something different than my usual PHP/C++/C toolset. I'm looking for recommendations for languages/technologies that will assist me and meet the above requirements. In particular, I've heard a lot of buzz about F# and Python on SO. I've used CLISP for small problems before, and kinda liked it. I'm seeking opinions about those in particular.

    Edit: since I rent the DB server and have a limited amount of CPU time online, I'm trying to do the analysis on a local machine.


  • WCF XmlSerializer assembly not speeding up first request

    - by Matt Dearing
    I am generating proxy classes for a client's Java web service WSDLs and XSD files with svcutil. The first call made to each service proxy class takes a very long time. I was hoping to speed this up by generating the XmlSerializers assembly myself (based on the article "How to: Improve the Startup Time of WCF Client Applications using the XmlSerializer"), but when I do, the first call to each service still takes the same amount of time. Here are the steps I am following:

        //generate strong name key file
        sn -k Blah.snk

        //generate the proxy class file
        svcutil blah.wsdl blah2.wsdl blah3.wsdl ... base.xsd blah.xsd ... /UseSerializerForFaults /ser:XmlSerializer /n:*,SomeNamespace /out:Blah.cs

        //compile the class into an assembly, signing it with the strong name key file
        csc /target:library /keyfile:Blah.snk /out:Blah.dll Blah.cs

        //generate the XmlSerializer code; this will give us Blah.XmlSerializers.dll.cs
        svcutil /t:xmlSerializer Blah.dll

        //compile the xmlserializer code into its own dll, using the same key to sign it and referencing the original dll
        csc /target:library /keyfile:Blah.snk /out:Blah.XmlSerializers.dll Blah.XmlSerializers.dll.cs /r:Blah.dll

    I then create a standard Console application that references both Blah.dll and Blah.XmlSerializers.dll. I will then try something like:

        //BlahProxy is one of the generated service proxy classes
        BlahProxy p = new BlahProxy();
        //this call takes 30ish seconds
        p.SomeMethod();

        BlahProxy p2 = new BlahProxy();
        //this call takes < 1 second
        p2.SomeMethod();

        //BlahProxy2 is one of the generated service proxy classes
        BlahProxy2 p3 = new BlahProxy2();
        //this call takes 30ish seconds
        p3.SomeMethod();

        BlahProxy2 p4 = new BlahProxy2();
        //this call takes < 1 second
        p4.SomeMethod();

    I know that the problem is not server side, because I don't see the request made in Fiddler until around 29 seconds in. Subsequent calls to each service take < 1 second, so that's why I was hoping the main slowdown was the .NET runtime generating the xmlserializer code itself, compiling it and loading the assembly. I figured this would be the reason the first call to each service is slow and the rest are fast. Unfortunately, generating the code myself is not speeding anything up. Does anyone see what I am doing wrong?


  • How can I troubleshoot an APPCRASH in Internet Explorer?

    - by Schnapple
    I'm writing an ActiveX control using the FireBreath framework (hi taxilian!) and while it technically works, I'm running into a weird issue that appears to be unique to me.

    I've followed the instructions to create a simple plugin and then I ran it in Internet Explorer 8 on Windows 7 x64 (FireBreath sets up a test page for the control). But as soon as I try to test it (clicking on a link that fires off JavaScript to interact with the control), IE crashes. Hard. "Internet Explorer has stopped working" style.

    If I try the control in Firefox (the resulting registered DLL can also be called as a Firefox plugin using a MIME type), it works fine. If I try it on my XP box, it works fine. I emailed the DLL and the testing page to a coworker in the next cube who, like me, is also running Windows 7 x64, and it works for him just fine as well, so it's not something unique to Windows 7 or x64. When it crashes I get this message:

        Problem signature:
          Problem Event Name:       APPCRASH
          Application Name:         iexplore.exe
          Application Version:      8.0.7600.16385
          Application Timestamp:    4a5bc69e
          Fault Module Name:        RPCRT4.dll
          Fault Module Version:     6.1.7600.16385
          Fault Module Timestamp:   4a5bdb3b
          Exception Code:           c0000005
          Exception Offset:         000220b1
          OS Version:               6.1.7600.2.0.0.256.1
          Locale ID:                1033
          Additional Information 1: 0a9e
          Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
          Additional Information 3: 0a9e
          Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    Which tells me nothing extremely useful. I can have it attach to a debugger, but it just tells me a long list of DLLs, none of which is the ActiveX control in question. It's almost like it's not even getting there.

    I did a sfc /scannow yesterday to see if anything on my system is corrupt and nothing came up as wrong. I tried various different security levels in IE, but nothing seems to have any effect. As this is a development machine there has been all manner of crap installed on it, so I figure it's bound to be something I've installed since October (when Win7 was released), but I cannot figure out what it is.

    I presume the information it's giving me when I attach to Visual Studio is useful somehow, but I don't know how to interpret it. Admittedly I'm mainly a C#/.NET developer who's a bit out of his element with C/C++ and troubleshooting native code, but does anyone have any advice on how to proceed on figuring out why this very simple ActiveX control crashes IE on my machine and nowhere else?


  • Creating thousands of records in Rails

    - by willCosgrove
    Let me set the stage: my application deals with gift cards. When we create cards they have to have a unique string that the user can use to redeem them with. So when someone orders our gift cards, like a retailer, we need to make a lot of new card objects and store them in the DB.

    With that in mind, I'm trying to see how quickly I can have my application generate 100,000 Cards. Database expert, I am not, so I need someone to explain this little phenomenon: when I create 1,000 Cards, it takes 5 seconds. When I create 100,000 cards it should take 500 seconds, right?

    Now I know what you're wanting to see, the card creation method I'm using, because the first assumption would be that it's getting slower because it's checking the uniqueness of a bunch of cards, more as it goes along. But I can show you my rake task:

        desc "Creates cards for a retailer"
        task :order_cards, [:number_of_cards, :value, :retailer_name] => :environment do |t, args|
          t = Time.now
          puts "Searching for retailer"
          @retailer = Retailer.find_by_name(args[:retailer_name])
          puts "Retailer found"
          puts "Generating codes"
          value = args[:value].to_i
          number_of_cards = args[:number_of_cards].to_i
          codes = []
          top_off_codes(codes, number_of_cards)
          while codes != codes.uniq
            codes.uniq!
            top_off_codes(codes, number_of_cards)
          end
          stored_codes = Card.all.collect do |c|
            c.code
          end
          while codes != (codes - stored_codes)
            codes -= stored_codes
            top_off_codes(codes, number_of_cards)
          end
          puts "Codes are unique and generated"
          puts "Creating bundle"
          @bundle = @retailer.bundles.create!(:value => value)
          puts "Bundle created"
          puts "Creating cards"
          @bundle.transaction do
            codes.each do |code|
              @bundle.cards.create!(:code => code)
            end
          end
          puts "Cards generated in #{Time.now - t}s"
        end

        def top_off_codes(codes, intended_number)
          (intended_number - codes.size).times do
            codes << ReadableRandom.get(CODE_LENGTH)
          end
        end

    I'm using a gem called readable_random for the unique code. So if you read through all of that code, you'll see that it does all of its uniqueness testing before it ever starts creating cards. It also writes status updates to the screen while it's running, and it always sits for a while at "Creating cards". Meanwhile it flies through the uniqueness tests.

    So my question to the StackOverflow community is: why is my database slowing down as I add more cards? Why is this not a linear function in regards to time per card? I'm sure the answer is simple and I'm just a moron who knows nothing about data storage. And if anyone has any suggestions, how would you optimize this method, and how fast do you think you could get it to create 100,000 cards?

    (When I plotted out my times on a graph and did a quick curve fit to get my line formula, I calculated how long it would take to create 100,000 cards with my current code and it says 5.5 hours. That may be completely wrong, I'm not sure. But if it stays on the line I curve fitted, it would be right around there.)
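
    As a point of comparison, a hedged sketch of doing the insert as a handful of multi-row INSERT statements instead of 100,000 individual create! calls (the column names are assumed, and this bypasses ActiveRecord validations and callbacks):

        # Inside the rake task, replacing the codes.each / create! loop:
        conn = ActiveRecord::Base.connection
        codes.each_slice(1000) do |batch|
          values = batch.map { |code| "(#{@bundle.id}, #{conn.quote(code)})" }.join(", ")
          conn.execute("INSERT INTO cards (bundle_id, code) VALUES #{values}")
        end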


  • XSLT Document function returns empty result on Maven POM

    - by user328618
    Greetings! I want to extract some properties from different Maven POMs in an XSLT via the document function. The script itself works fine, but the document function returns an empty result for the POM as long as I have the xmlns="http://maven.apache.org/POM/4.0.0" in the project tag. If I remove it, everything works fine. Any idea how to make this work while leaving the xmlns attribute where it belongs, or why this doesn't work with the attribute in place?

    Here comes the relevant portion of my XSLT:

        <xsl:template match="abcs">
          <xsl:variable name="artifactCoordinate" select="abc"/>
          <xsl:choose>
            <xsl:when test="document(concat($artifactCoordinate,'-pom.xml'))">
              <abc>
                <ID><xsl:value-of select="$artifactCoordinate"/></ID>
                <xsl:copy-of select="document(concat($artifactCoordinate,'-pom.xml'))/project/properties"/>
              </abc>
            </xsl:when>
            <xsl:otherwise>
              <xsl:message terminate="yes">
                Transformation failed: POM "<xsl:value-of select="concat($artifactCoordinate,'-pom.xml')"/>" doesn't exist.
              </xsl:message>
            </xsl:otherwise>
          </xsl:choose>

    And, for completeness, a POM extract with the "bad" attribute:

        <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <!-- ... -->
          <properties>
            <proalpha.version>[5.2a]</proalpha.version>
            <proalpha.openedge.version>[10.1B]</proalpha.openedge.version>
            <proalpha.optimierer.version>[1.1]</proalpha.optimierer.version>
            <proalpha.sonic.version>[7.6.1]</proalpha.sonic.version>
          </properties>
        </project>
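
    For reference, the usual way to make document() see elements that live in a default namespace is to bind that namespace to a prefix in the stylesheet and use the prefix in the XPath (sketch; the pom prefix is arbitrary):

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:pom="http://maven.apache.org/POM/4.0.0">
          <!-- ... -->
          <xsl:copy-of select="document(concat($artifactCoordinate,'-pom.xml'))/pom:project/pom:properties"/>
        </xsl:stylesheet>

    In XPath 1.0 an unprefixed name always means "no namespace", which is why the plain /project/properties path stops matching as soon as the POM declares xmlns="http://maven.apache.org/POM/4.0.0".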


  • How to produce precisely-timed tone and silence in C#

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms.

    It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec of sine wave in a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer and play.

    I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone.

    I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, and again seeking forward, leaving the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end, then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80 ms after starting Play of a 40 ms tone it wouldn't have buffers on the queue.

    DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playing, I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers/whatever piling up till it's choked.

    I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution. Thanks in advance...
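
    In case it helps frame the options: one way to sidestep Play()/Sleep()/Stop() timing entirely is to render the whole dot/dash/silence sequence into a single in-memory WAV and play it with one blocking call, so the timing lives in the sample data rather than in the thread scheduler. A minimal sketch using only System.Media (the class and method names here are mine, not from the project above):

        using System;
        using System.IO;
        using System.Media;

        static class MorsePlayer
        {
            const int SampleRate = 44100;

            // durationsMs: positive entries are tone bursts, negative entries are silences.
            public static void Play(int[] durationsMs, double freqHz)
            {
                // 1. Generate raw 16-bit mono PCM for the whole sequence.
                //    (No envelope/ramping, so there may be slight clicks at burst edges.)
                var pcm = new MemoryStream();
                var w = new BinaryWriter(pcm);
                foreach (int d in durationsMs)
                {
                    int count = SampleRate * Math.Abs(d) / 1000;
                    for (int i = 0; i < count; i++)
                    {
                        double v = d > 0 ? Math.Sin(2 * Math.PI * freqHz * i / SampleRate) : 0.0;
                        w.Write((short)(v * 0.8 * short.MaxValue));
                    }
                }
                w.Flush();
                byte[] data = pcm.ToArray();

                // 2. Wrap the samples in a minimal RIFF/WAV header.
                var wav = new MemoryStream();
                var h = new BinaryWriter(wav);
                h.Write(new[] { 'R', 'I', 'F', 'F' }); h.Write(36 + data.Length);
                h.Write(new[] { 'W', 'A', 'V', 'E', 'f', 'm', 't', ' ' });
                h.Write(16); h.Write((short)1); h.Write((short)1);   // PCM, mono
                h.Write(SampleRate); h.Write(SampleRate * 2);        // sample rate, byte rate
                h.Write((short)2); h.Write((short)16);               // block align, bits per sample
                h.Write(new[] { 'd', 'a', 't', 'a' }); h.Write(data.Length);
                h.Write(data); h.Flush();
                wav.Position = 0;

                // 3. One blocking call plays the whole message; no Sleep()/Stop() juggling.
                new SoundPlayer(wav).PlaySync();
            }
        }

    Called as, say, MorsePlayer.Play(new[] { 40, -40, 120, -280 }, 700.0) for dit-space-dah-gap, the 40 ms elements come out sample-accurate regardless of CPU load; the trade-off is that the sequence has to be known before playback starts.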


  • Determine the 'Overtype' mode using Javascript

    - by Snorkpete
    We are creating a web app to replace an old-school green-screen application. In the green-screen app, as the user presses the Insert key to switch between overtype and insert modes, the cursor changes to indicate which input mode the user is currently in. In IE (which is the official browser of the company), overtype mode also works, but there's no visual indication as to whether overtype mode is on or not, until the user starts typing and possibly over-writes existing information unexpectedly.

    I'd like to put some sort of visual indicator on the screen if in overtype mode. How can you determine if the browser is in 'overtype mode' from Javascript? Is there some property or function I can query to determine if the browser is in overtype mode? Even an IE-specific solution would be helpful, since our corporate policy dictates the browser to use as IE7 (pure torture, btw).

    (I do know that one solution is to check for key presses of the Insert key. However, it's a solution that I'd prefer to avoid, since that method seems a bit flaky and error-prone because I can't guarantee what mode the user would be in BEFORE he/she hits my page.)

    The reasoning behind this question: the functionality of this portion of the green-screen app is such that the user can select from a list of 'preformatted bodies of text'. A crude example:

        The excess for this policy is: $xxxxxx and max limit is: $xxxxxx
        Date of policy is: xx/xx/xxxx and expires: xx/xx/xxxx
        Some other irrelevant text

    After selecting this 'preformatted text', the user would then use overtype to replace the x's with actual values, without disturbing the alignment of the rest of the text. (To be clear, they can still edit any part of the 'preformatted text' if they so wished. It's just that usually they just wish to replace specific portions of the text. Keeping the alignment is important since these sections of text can end up on printed documents.)

    Of course, the same effect can be achieved by just selecting the x's to replace first, but it would be helpful (with respect to easing the transition to the web app) to allow old methods of doing things to continue to work, while still allowing 'web methods' to be used by the more tech-savvy users. Essentially, we're trying to make the initial transition from the green-screen app to the web app as seamless as possible to minimise the resistance from the long-time green-screeners.


  • Android app crashes when I change the default xml layout file to another

    - by mib1413456
    I am currently just starting to learn Android development and have created a basic "Hello world" app that uses "activity_main.xml" for the default layout. I tried to create a new layout xml file called "new_layout.xml" with a text view, a text field and a button, and made the following change in the MainActivity.java file:

        setContentView(R.layout.new_layout);

    I did nothing else except for adding new_layout.xml in the res/layout folder. I have tried restarting and cleaning the project, but nothing. Below are my activity_main.xml file, new_layout.xml file and MainActivity.java.

    activity_main.xml:

        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:id="@+id/container"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            tools:context="org.example.androidsdk.demo.MainActivity"
            tools:ignore="MergeRootFrame" />

    new_layout.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="horizontal" >

            <TextView
                android:id="@+id/textView1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="TextView" />

            <EditText
                android:id="@+id/editText1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_weight="1"
                android:ems="10" >

                <requestFocus />
            </EditText>

            <Button
                android:id="@+id/button1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="Button" />

        </LinearLayout>

    MainActivity.java:

        package org.example.androidsdk.demo;

        import android.app.Activity;
        import android.app.ActionBar;
        import android.app.Fragment;
        import android.os.Bundle;
        import android.view.LayoutInflater;
        import android.view.Menu;
        import android.view.MenuItem;
        import android.view.View;
        import android.view.ViewGroup;
        import android.os.Build;

        public class MainActivity extends Activity {

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.new_layout);

                if (savedInstanceState == null) {
                    getFragmentManager().beginTransaction()
                            .add(R.id.container, new PlaceholderFragment())
                            .commit();
                }
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                // Inflate the menu; this adds items to the action bar if it is present.
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }

            @Override
            public boolean onOptionsItemSelected(MenuItem item) {
                // Handle action bar item clicks here. The action bar will
                // automatically handle clicks on the Home/Up button, so long
                // as you specify a parent activity in AndroidManifest.xml.
                int id = item.getItemId();
                if (id == R.id.action_settings) {
                    return true;
                }
                return super.onOptionsItemSelected(item);
            }

            /**
             * A placeholder fragment containing a simple view.
             */
            public static class PlaceholderFragment extends Fragment {

                public PlaceholderFragment() {
                }

                @Override
                public View onCreateView(LayoutInflater inflater, ViewGroup container,
                        Bundle savedInstanceState) {
                    View rootView = inflater.inflate(R.layout.fragment_main, container, false);
                    return rootView;
                }
            }
        }


  • Custom NSView in NSMenuItem not receiving mouse events

    - by Dennis
    I have an NSMenu popping out of an NSStatusItem using popUpStatusItemMenu. These NSMenuItems show a bunch of different links, and each one is connected with setAction: to the openLink: method of a target. This arrangement has been working fine for a long time. The user chooses a link from the menu and the openLink: method then deals with it.

    Unfortunately, I recently decided to experiment with using NSMenuItem's setView: method to provide a nicer/slicker interface. Basically, I just stopped setting the title, created the NSMenuItem, and then used setView: to display a custom view. This works perfectly; the menu items look great and my custom view is displayed.

    However, when the user chooses a menu item and releases the mouse, the action no longer works (i.e., openLink: isn't called). If I just simply comment out the setView: call, then the actions work again (of course, the menu items are blank, but the action is executed properly). My first question, then, is why setting a view breaks the NSMenuItem's action.

    No problem, I thought, I'll fix it by detecting the mouseUp event in my custom view and calling my action method from there. I added this method to my custom view:

        - (void)mouseUp:(NSEvent *)theEvent {
            NSLog(@"in mouseUp");
        }

    No dice! This method is never called. I can set tracking rects and receive mouseEntered: events, though. I put a few tests in my mouseEntered routine, as follows:

        if ([[self window] ignoresMouseEvents]) {
            NSLog(@"ignoring mouse events");
        } else {
            NSLog(@"not ignoring mouse events");
        }

        if ([[self window] canBecomeKeyWindow]) {
            dNSLog((@"canBecomeKeyWindow"));
        } else {
            NSLog(@"not canBecomeKeyWindow");
        }

        if ([[self window] isKeyWindow]) {
            dNSLog((@"isKeyWindow"));
        } else {
            NSLog(@"not isKeyWindow");
        }

    And got the following responses:

        not ignoring mouse events
        canBecomeKeyWindow
        not isKeyWindow

    Is this the problem? "not isKeyWindow"? Presumably this isn't good, because Apple's docs say "If the user clicks a view that isn't in the key window, by default the window is brought forward and made key, but the mouse event is not dispatched." But there must be a way to detect these events. HOW?

    Adding:

        [[self window] makeKeyWindow];

    has no effect, despite the fact that canBecomeKeyWindow is YES.


  • NoHostAvailableException With Cassandra & DataStax Java Driver If Large ResultSet

    - by hughj
    The setup:

    - 2-node Cassandra 1.2.6 cluster
    - replicas=2
    - very large CQL3 table with no secondary index
    - Rowkey is a UUID.randomUUID().toString()
    - read consistency set to ONE
    - Using DataStax java driver 1.0

    The request: attempting to do a table scan with "SELECT some-col from schema.table LIMIT nnn;"

    The fail: once I go beyond a certain nnn LIMIT, I start to get NoHostAvailableExceptions from the driver. It reads like this:

        com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
            at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64)
            at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:214)
            at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:169)
            at com.jpmc.es.rtm.storage.impl.EventExtract.main(EventExtract.java:36)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
        Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
            at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:98)
            at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:165)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)

    Given: this is probably not the most enlightened thing to do to a large table with millions of rows, but this is how I learn what not to do, so I would really appreciate someone who could volunteer how this kind of error can be debugged.

    For example, when this happens, there are no indications that the nodes in the cluster ever had an issue with the request (there is nothing in the logs on either node that indicates any timeout or failure). Also, I enabled the trace on the driver, which gives you some nice autotrace (ala Oracle) info as long as the query succeeds. But in this case, the driver blows a NoHostAvailableException and no ExecutionInfo is available, so tracing has not provided any benefit in this case. I also find it interesting that this does not seem to be recorded as a timeout (my JMX consoles tell me no timeouts have occurred).

    So, I am left not understanding WHERE the failure is actually occurring. I am left with the idea that it is the driver that is having a problem, but I don't know how to debug it (and I would really like to). I have read several posts from folks who state that querying for resultSets of more than 10000 rows is probably not a good idea, and I am willing to accept this, but I would like to understand what is causing the exception and where the exception is happening.

    FWIW, I also tried bumping the timeout properties in the cassandra.yaml, but this made no difference whatsoever. I welcome any suggestions, anecdotes, insults, or monetary contributions for my registration in the house of moron-developers. Regards!!
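
    Not an explanation of the exception itself, but for the underlying "scan a huge table" goal, the usual workaround with a driver that predates automatic paging is to walk the table in token order with a modest page size. A rough sketch against the 1.0 API (the key and column names are placeholders, and the page size is arbitrary):

        import com.datastax.driver.core.Cluster;
        import com.datastax.driver.core.ResultSet;
        import com.datastax.driver.core.Row;
        import com.datastax.driver.core.Session;

        public class TokenScan {
            public static void main(String[] args) {
                Cluster cluster = Cluster.builder().addContactPoint("10.181.13.239").build();
                Session session = cluster.connect();

                int pageSize = 1000;
                String lastKey = null;
                while (true) {
                    String cql = (lastKey == null)
                        ? "SELECT rowkey, somecol FROM schema.table LIMIT " + pageSize
                        : "SELECT rowkey, somecol FROM schema.table WHERE token(rowkey) > token('"
                            + lastKey + "') LIMIT " + pageSize;
                    ResultSet rs = session.execute(cql);
                    lastKey = null;
                    for (Row row : rs) {
                        lastKey = row.getString("rowkey");   // process the row here
                    }
                    if (lastKey == null) break;              // no rows left
                }
                cluster.shutdown();
            }
        }

    Keeping each request to a bounded LIMIT also tends to make the failure mode clearer: a single huge LIMIT makes the coordinator build the entire result for one response, which is exactly the kind of request that can die between node and driver without showing up as a clean server-side timeout.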


  • Creating meaningful routes in wizard style ASP.NET MVC form

    - by R0MANARMY
    I apologize in advance for a long question; figured it's better to have a bit more information than not enough.

    I'm working on an application with a fairly complex form (~100 fields on it). In order to make the UI a little more presentable, the fields are organized into regions and split across multiple (~10) tabs (not unlike this, but each tab does a submit/redirect to the next tab). This large input form can also be in one of 3 views (read only, editable, print friendly).

    The form represents a large domain object (let's call it Foo). I have a controller for said domain object (FooController). It makes sense to me to have the controller be responsible for all the CRUD related operations. Here are the problems I'm having trouble figuring out.

    Goals:

    1. I'd like to keep to conventions so that

       - Foo/Create creates a new record
       - Foo/Delete deletes a record
       - Foo/Edit/{foo_id} takes you to the first tab of the form
       - ...etc

    2. I'd like to be able to not repeat the data access code, such that

       - Foo/Edit/{foo_id}/tab1
       - Foo/View/{foo_id}/tab1
       - Foo/Print/{foo_id}/tab1
       - ...etc

       use the same data access code to get the data and just specify which view to use to render it.

    My current implementation has a massive FooController with Create, Delete, Tab1, Tab2, etc. actions. Tab actions are split out into separate files for organization (using partial classes, which may or may not be abuse of partial classes). The problem I'm running into is how to organize my controller(s) and routes to make that happen. I have the default route

        {controller}/{action}/{id}

    which handles goal 1 properly but doesn't quite play nice with goal 2. I tried to address goal 2 by defining extra routes like so:

        routes.MapRoute(
            "FooEdit",
            "Foo/Edit/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "Edit", id = (string)null }
        );

        routes.MapRoute(
            "FooView",
            "Foo/View/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "View", id = (string)null }
        );

        routes.MapRoute(
            "FooPrint",
            "Foo/Print/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "Print", id = (string)null }
        );

    However, defining these extra routes causes Url.Action to generate routes like Foo/Edit/Create instead of Foo/Create. That leads me to believe I designed something very, very wrong, but this is my first attempt at an ASP.NET MVC project and I don't know any better. Any advice with this particular situation would be awesome, but feedback on design in similar projects is welcome.
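
    One hedged sketch of an alternative shape (the route names, the Tab1 default and the mode list are illustrative): collapse the three mode routes into one with a constraint so it can never swallow Foo/Create, and generate links by route name so ambient mode/id values from a tab page can't leak into the CRUD links.

        routes.MapRoute(
            "FooTabs",
            "Foo/{mode}/{id}/{action}",
            new { controller = "Foo", action = "Tab1" },
            new { mode = "Edit|View|Print" }   // regex constraint: only these modes take the tabbed URL shape
        );

        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" }
        );

        // Link generation, e.g. in a view or helper (Model.Id is assumed):
        Url.RouteUrl("Default", new { controller = "Foo", action = "Create" })          // -> /Foo/Create
        Url.RouteUrl("FooTabs", new { mode = "Edit", id = Model.Id, action = "Tab2" })  // -> /Foo/Edit/{id}/Tab2

    The mode value then arrives as an action parameter (or can be read in a base controller), which is one way to let the Edit, View and Print variants share the same data access code and differ only in the view they render.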


  • Need to set cursor position to the end of a contentEditable div, issue with selection and range objects

    - by DavidR
    I'm forgetting about cross-browser compatibility for the moment, I just want this to work. What I'm doing is trying to modify a script (and you probably don't need to know this) located at typegreek.com. The basic script is found here. Basically what it does is, when you type in characters, it converts the character you are typing into Greek characters and prints it onto the screen. What I'm trying to do is get it to work on contentEditable divs (it only works for textareas).

    My issue is with this one function. The user types a key, it gets converted to a Greek key and goes to a function; it gets sorted through some ifs, and where it ends up is where I can add div support. Here is what I have so far (myField is the div, myValue is the Greek character):

        //Get selection object...
        var userSelection
        if (window.getSelection) {userSelection = window.getSelection();}
        else if (document.selection) {userSelection = document.selection.createRange();}

        //Now get the cursor position information...
        var startPos = userSelection.anchorOffset;
        var endPos = userSelection.focusOffset;
        var cursorPos = endPos;

        //Needed later when reinserting the cursor...
        var rangeObj = userSelection.getRangeAt(0)
        var container = rangeObj.startContainer

        //Now take the content from pos 0 -> cursor, add in myValue, then insert
        //everything after myValue to the end of the line.
        myField.textContent = myField.textContent.substring(0, startPos)
            + myValue
            + myField.textContent.substring(endPos, myField.textContent.length);

        //Now the issue is, this updates the string, and returns the cursor to the
        //beginning of the div, so that at the next keypress the character is inserted
        //into the beginning of the div. So we need to reinsert the cursor where it was.

        //Re-evaluate the cursor position, taking into account the added character.
        var cursorPos = endPos + myValue.length;

        //Set the character position.
        rangeObj.setStart(container,cursorPos)

    Now, this works only as long as I don't type more than the size of the original text. Say I had 30 characters in the div beforehand. If I type more than that 30, it adds character 31, but places the cursor back at 30. I can type character 32 at pos. 31, then character 33 at pos. 32, but if I try to put character 34 in, it adds the character and sets the cursor back at 32.

    The issue is that the function for adding the new character screws up if cursorPos is greater than what is defined in the range. Any ideas?
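
    Since the title also asks about parking the caret at the very end of a contentEditable div, here is the standards-based idiom for that specific step (a sketch; no fallback for the old IE document.selection model):

        function placeCaretAtEnd(el) {
            var range = document.createRange();
            range.selectNodeContents(el);     // range spans all of the div's content
            range.collapse(false);            // collapse the range to its end point
            var sel = window.getSelection();
            sel.removeAllRanges();
            sel.addRange(range);              // the caret now sits after the last character
        }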


  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure:

        id, site_id, unixtime, unixtime_last, ip_address, uid

    There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid.

    There are many different types of ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as to determine if this is a new visitor or a returning visitor.

    Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient?

    I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? Because I considered having site_id be the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID are going to vary more than the site ID will: we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine if this is a new visitor to this site_id or not. The query would be something like:

        select id from sessions where uid = 'value' and site_id = 123 limit 1

    ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID.

    Any insight on making this as efficient as possible would be appreciated :)

    Update - this is a MyISAM table with MySQL 5.0. My concerns are both with performance as well as storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.


  • stdio's remove() not always deleting on time.

    - by Kyte
    For a particular piece of homework, I'm implementing a basic data storage system using sequential files under standard C, which cannot load more than 1 record at a time. So, the basic part is creating a new file where the results of whatever we do with the original records are stored. The previous file is renamed, and a new one under the working name is created. The code's compiled with MinGW 5.1.6 on Windows 7.

    Problem is, this particular version of the code (I've got nearly-identical versions of this floating around my functions) doesn't always remove the old file, so the rename fails and hence the stored data gets wiped by the fopen().

        FILE *archivo, *antiguo;
        remove("IndiceNecesidades.old");                          // This randomly fails to work in time.
        rename("IndiceNecesidades.dat", "IndiceNecesidades.old"); // So rename() fails.
        antiguo = fopen("IndiceNecesidades.old", "rb");           // But apparently it still gets deleted, since this turns out null
                                                                  // (and I never find the .old in my working folder after the program's done).
        archivo = fopen("IndiceNecesidades.dat", "wb");           // And here the data gets wiped.

    Basically, any time the .old previously exists, there's a chance it's not removed in time for the rename() to take effect successfully. There are no possible name conflicts, either internally or externally. The weird thing is that it's only with this particular file. Identical snippets, except with the name changed to Necesidades.dat (which happen in 3 different functions), work perfectly fine:

        // I'm yet to see this snippet fail.
        FILE *antiguo, *archivo;
        remove("Necesidades.old");
        rename("Necesidades.dat", "Necesidades.old");
        antiguo = fopen("Necesidades.old", "rb");
        archivo = fopen("Necesidades.dat", "wb");

    Any ideas on why this would happen, and/or how I can ensure the remove() command has taken effect by the time rename() is executed? (I thought of just using a while loop to force call remove() again so long as fopen() returns a non-null pointer, but that sounds like begging for a crash due to overflowing the OS with delete requests or something.)
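
    As a first diagnostic step, it may be worth capturing why remove() fails rather than whether it fails: both remove() and rename() report errors through their return value and errno. A small sketch (the helper name is mine):

        /* Checks the return codes instead of assuming remove()/rename() worked.
         * On Windows, rename() onto an existing name fails, and remove() fails
         * with EACCES while any FILE* opened on the .old file is still open. */
        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        static int rotate(const char *dat, const char *old)
        {
            if (remove(old) != 0 && errno != ENOENT) {   /* ENOENT: nothing to remove, which is fine */
                fprintf(stderr, "remove(%s): %s\n", old, strerror(errno));
                return -1;
            }
            if (rename(dat, old) != 0) {
                fprintf(stderr, "rename(%s -> %s): %s\n", dat, old, strerror(errno));
                return -1;
            }
            return 0;
        }

    If the report turns out to be "Permission denied" on the remove(), the usual culprit is a previous IndiceNecesidades.old handle that was never fclose()d, which would also explain why the otherwise identical Necesidades.dat snippets never fail.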


  • How can I use Web Services Core to send a complex type as a parameter to a SOAP API method

    - by Matthew Brindley
    I don't do much Cocoa programming, so I'm probably missing something obvious - please excuse the basic question.

    I have a SOAP method that expects a complex type as a parameter. Here's some WSDL:

        <s:element name="SaveTestResult">
          <s:complexType>
            <s:sequence>
              <s:element minOccurs="0" maxOccurs="1" name="result" type="tns:TestItemResponse" />
            </s:sequence>
          </s:complexType>
        </s:element>

    Here's the definition of the complex type "TestItemResponse":

        <s:complexType name="TestItemResponse">
          <s:sequence>
            <s:element minOccurs="1" maxOccurs="1" name="TestItemRequestId" type="s:int" />
            <s:element minOccurs="1" maxOccurs="1" name="ExternalId" type="s:int" />
            <s:element minOccurs="0" maxOccurs="1" name="ApiId" type="s:string" />
            <s:element minOccurs="0" maxOccurs="1" name="InboxGuid" type="s:string" />
            <s:element minOccurs="0" maxOccurs="1" name="SpamResult" type="tns:SpamResult" />
            <s:element minOccurs="0" maxOccurs="1" name="ResultImageSet" type="tns:ResultImageSet" />
            <s:element minOccurs="1" maxOccurs="1" name="ExclusiveUseMailAccountId" type="s:int" />
            <s:element minOccurs="1" maxOccurs="1" name="State" type="tns:TestItemResponseState" />
            <s:element minOccurs="0" maxOccurs="1" name="ErrorShortDescription" type="s:string" />
            <s:element minOccurs="0" maxOccurs="1" name="ErrorFullDescription" type="s:string" />
          </s:sequence>
        </s:complexType>

    I've been using Web Services Core to call a SOAP API method that requires a simple string param; that works great. That same method returns a complex type which WSC converted into nested NSDictionaries, so no problems there. So I assumed I'd be able to convert my local TestItemResponse class into an NSDictionary and then use that as the complex type param. It almost worked, but unfortunately WSC set the element's type as "Dictionary" instead of "TestItemResponse", and the server complained:

        <TestItemResponse xsi:type="SOAP-ENC:Dictionary">
          <ErrorFullDescription xsi:type="xsd:string">foo</ErrorFullDescription>
          ...

    I can't seem to find anything that allows you to override the type WSC assigns to the element in the SOAP XML. I've been using code adapted from here; I'm happy to list it, it's just quite long and this is already the longest SO question I've ever posted.


  • How to use AES with 256 bits using the built-in Java 1.4 API

    - by sahil garg
    I am able to encrypt with AES-128, but with a longer key length it fails. The code using AES-128 is below:

        import java.security.*;
        import javax.crypto.*;
        import javax.crypto.spec.*;
        import java.io.*;

        /**
         * This program generates a AES key, retrieves its raw bytes, and
         * then reinstantiates a AES key from the key bytes.
         * The reinstantiated key is used to initialize a AES cipher for
         * encryption and decryption.
         */
        public class AES {

            /**
             * Turns array of bytes into string
             *
             * @param buf Array of bytes to convert to hex string
             * @return Generated hex string
             */
            public static String asHex (byte buf[]) {
                StringBuffer strbuf = new StringBuffer(buf.length * 2);
                int i;
                for (i = 0; i < buf.length; i++) {
                    if (((int) buf[i] & 0xff) < 0x10)
                        strbuf.append("0");
                    strbuf.append(Long.toString((int) buf[i] & 0xff, 16));
                }
                return strbuf.toString();
            }

            public static void main(String[] args) throws Exception {
                String message="This is just an example";

                // Get the KeyGenerator
                KeyGenerator kgen = KeyGenerator.getInstance("AES");
                kgen.init(128); // 192 and 256 bits may not be available

                // Generate the secret key specs.
                SecretKey skey = kgen.generateKey();
                byte[] raw = skey.getEncoded();
                SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES");

                // Instantiate the cipher
                Cipher cipher = Cipher.getInstance("AES");
                cipher.init(Cipher.ENCRYPT_MODE, skeySpec);
                byte[] encrypted = cipher.doFinal("welcome".getBytes());
                System.out.println("encrypted string: " + asHex(encrypted));

                cipher.init(Cipher.DECRYPT_MODE, skeySpec);
                byte[] original = cipher.doFinal(encrypted);
                String originalString = new String(original);
                System.out.println("Original string: " + originalString + " " + asHex(original));
            }
        }
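
    For context on why the 256-bit variant fails while this code works: on JREs of that vintage, the JCE ships with restricted (128-bit) jurisdiction policy files, and AES-256 only becomes available once the Unlimited Strength policy JARs matching that exact JRE are installed into jre/lib/security. No API change is needed beyond the key size. A sketch of the 256-bit path under that assumption:

        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.SecretKeySpec;

        public class AES256 {
            public static void main(String[] args) throws Exception {
                KeyGenerator kgen = KeyGenerator.getInstance("AES");
                kgen.init(256);                      // was 128 above; needs the unlimited-strength policy files
                SecretKey skey = kgen.generateKey();
                SecretKeySpec skeySpec = new SecretKeySpec(skey.getEncoded(), "AES");

                Cipher cipher = Cipher.getInstance("AES");
                cipher.init(Cipher.ENCRYPT_MODE, skeySpec);  // fails with "Illegal key size" if the policy is restricted
                byte[] encrypted = cipher.doFinal("welcome".getBytes());
                System.out.println("256-bit encrypt OK: " + encrypted.length + " bytes");
            }
        }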


  • How to include a PHP generated XML file into flash vars, while ALSO passing through the current php functions into it?

    - by Sam
    Hello. Given situation: webpage.php embeds a Flash movie with a flashvar pointing at the playlist file, which is a PHP-generated XML file: playlist.php. It does that well, so long as there are no extra functions in there.

    Now, in that XML-format playlist file there needs to be a special function besides the usual echo("");, namely the very special echo __(""); function that is already declared in webpage.php, which needs to do something with the paragraphs residing within that xml file.

    However, currently the retrieved file misses the echo __(); function and says "no such function declared in that xml-format [playlist.php] file". The php functions that are currently included at the very top of webpage.php somehow do not pass through the necessary functions into the playlist file for it to recognise how to handle it, in order for that playlist to get those necessary functions working. Apparently these are not passed through automatically/properly when residing in the flashvars??

    Because echo __(""); works fine when called within webpage.php, or via a normal php include(""); if those functions are in a different php file. But it is not working from the playlist.php file. Any ideas why / what is going on here? I appreciate your clues for this prob +1. Thanks very much.

    WEBPAGE.PHP - the webpage has, at the top, an include with functions:

        <?php include (functions.php); ?> // functions that know what to do with echo __("paragraph")

        <script language="JavaScript" type="text/javascript">
            run(
                'play', 'true',
                'loop', 'true',
                'flashvars', 'xmlFile=/incl/playlist.php', // <<<< !!
                'wmode', 'transparent',
                'allowScriptAccess','sameDomain',
            );
        </script>
        <noscript>
            <object classid="blabla">
                <param name="allowScriptAccess" value="sameDomain" />
                <param name="movie" value="/movies/movie.swf" />
                <param name="flashvars" value="xmlFile=/incl/playlist.php" /> // <<< !!
                <embed src="/movies/movies.swf" type="application/x-shockwave-flash"/>
            </object>
        </noscript>

    PLAYLIST.PHP - the PHP-generated XML file which is retrieved into the webpage as a flash variable (see above):

        <?php
        echo ('<?xml version="1.0" encoding="UTF-8"?>');
        echo ('<songs>');
        echo ('<song version="1. "') . __("boom blue blow bell bowl") . ('/>');
        echo ('<song version="2. "') . __("ball bail beam bike base") . ('/>');
        echo ('</songs>');
        ?>
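
    One thing that may be worth stating explicitly: playlist.php is fetched by the Flash player in its own HTTP request, so nothing included by webpage.php is in scope when it runs - it has to pull in the file that defines __() itself. A sketch (the include path is an assumption, and the __() output is placed inside the attribute here so the XML stays well-formed):

        <?php
        // playlist.php -- include the translation helpers here as well, because this
        // script runs in a separate request and never sees webpage.php's includes.
        include_once dirname(__FILE__) . '/functions.php';   // defines __()

        header('Content-Type: text/xml; charset=UTF-8');
        echo '<?xml version="1.0" encoding="UTF-8"?>';
        echo '<songs>';
        echo '<song version="1. ' . __("boom blue blow bell bowl") . '"/>';
        echo '<song version="2. ' . __("ball bail beam bike base") . '"/>';
        echo '</songs>';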


  • Lightbox image / link URL

    - by GSTAR
    Basically, I have a slightly non-standard implementation of FancyBox. By default you have to include a link to the large version of the image so that the Lightbox can display it. However, in my implementation, the image link URLs point to a script rather than directly to the image file. So for example, instead of:

        <a href="mysite/images/myimage.jpg" rel="gallery">

    I have:

        <a href="mysite/photos/view/abc123" rel="gallery">

    The above URL points to a function:

        public function actionPhotos($view)
        {
            $photo=Photo::model()->find('name=:name', array(':name'=>$view));
            if(!empty($photo))
            {
                $this->renderPartial('_photo', array('photo'=>$photo, true));
            }
        }

    The "$this->renderPartial()" bit simply calls a layout file which includes a standard HTML img tag to output. Now, when the user clicks on a thumbnail, the above function is called and the large image is displayed in the Lightbox.

    But if the user right clicks on the thumbnail and selects "open in new tab/window", the image is displayed in the browser as per normal, i.e. just the image. I want to change this so that it displays the image within a layout. In the above code I can include the following and put it in an IF statement:

        $this->render('photos', array('photo'=>$photo));

    This will call the layout file "photos", which contains the layout to display the image in. I have a specific limitation for this: the image URL must remain the same, i.e. no additional GET variables in the URL. However, if we can pass in a GET variable in the background then that is OK. I will most likely need to change my function above so that it calls a different file for this functionality.

    EDIT: To demonstrate exactly what I am trying to do, check out the following: http://www.starnow.co.uk/KimberleyMarren

    Go to the photos tab and hover over a thumbnail - note the URL. Click the thumbnail and it will open up in the Lightbox. Next, right click on that same thumbnail and select "open in new tab/new window". You will notice that the image is now displayed in a layout. So that same URL is used for displaying the image in the Lightbox and on its own page. The way StarNow have done this is using some crazy long JavaScript functionality, which I'm not too keen on replicating.


  • Synchronized IEnumerator<T>

    - by Dan Bryant
    I'm putting together a custom SynchronizedCollection<T> class so that I can have a synchronized Observable collection for my WPF application. The synchronization is provided via a ReaderWriterLockSlim, which, for the most part, has been easy to apply. The case I'm having trouble with is how to provide thread-safe enumeration of the collection. I've created a custom IEnumerator<T> nested class that looks like this:

        private class SynchronizedEnumerator : IEnumerator<T>
        {
            private SynchronizedCollection<T> _collection;
            private int _currentIndex;

            internal SynchronizedEnumerator(SynchronizedCollection<T> collection)
            {
                _collection = collection;
                _collection._lock.EnterReadLock();
                _currentIndex = -1;
            }

            #region IEnumerator<T> Members

            public T Current { get; private set;}

            #endregion

            #region IDisposable Members

            public void Dispose()
            {
                var collection = _collection;
                if (collection != null)
                    collection._lock.ExitReadLock();
                _collection = null;
            }

            #endregion

            #region IEnumerator Members

            object System.Collections.IEnumerator.Current
            {
                get { return Current; }
            }

            public bool MoveNext()
            {
                var collection = _collection;
                if (collection == null)
                    throw new ObjectDisposedException("SynchronizedEnumerator");

                _currentIndex++;
                if (_currentIndex >= collection.Count)
                {
                    Current = default(T);
                    return false;
                }

                Current = collection[_currentIndex];
                return true;
            }

            public void Reset()
            {
                if (_collection == null)
                    throw new ObjectDisposedException("SynchronizedEnumerator");

                _currentIndex = -1;
                Current = default(T);
            }

            #endregion
        }

    My concern, however, is that if the Enumerator is not Disposed, the lock will never be released. In most use cases this is not a problem, as foreach should properly call Dispose. It could be a problem, however, if a consumer retrieves an explicit Enumerator instance. Is my only option to document the class with a caveat reminding the consumer to call Dispose when using the Enumerator explicitly, or is there a way to safely release the lock during finalization? I'm thinking not, since the finalizer doesn't even run on the same thread, but I was curious if there are other ways to improve this.

    EDIT

    After thinking about this a bit and reading the responses (particular thanks to Hans), I've decided this is definitely a bad idea. The biggest issue actually isn't forgetting to Dispose, but rather a leisurely consumer creating deadlock while enumerating. I now only read-lock long enough to get a copy and return the enumerator for the copy.
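
    For completeness, a minimal sketch of the snapshot approach mentioned in the edit - hold the read lock only long enough to copy, then hand out an enumerator over the copy (assuming the underlying storage is a List<T> field called _items):

        public IEnumerator<T> GetEnumerator()
        {
            List<T> snapshot;
            _lock.EnterReadLock();
            try
            {
                snapshot = new List<T>(_items);   // copy while the read lock is held
            }
            finally
            {
                _lock.ExitReadLock();
            }
            // The lock is already released here; a slow consumer can no longer block writers.
            return snapshot.GetEnumerator();
        }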


  • Faster Matrix Multiplication in C#

    - by Kyle Lahnakoski
    I have a small C# project that involves matrices. I am processing large amounts of data by splitting it into n-length chunks, treating the chunks as vectors, and multiplying by a Vandermonde** matrix. The problem is, depending on the conditions, the size of the chunks and the corresponding Vandermonde** matrix can vary. I have a general solution which is easy to read, but way too slow:

        public byte[] addBlockRedundancy(byte[] data) {
            if (data.Length!=numGood)
                D.error("Expecting data to be just "+numGood+" bytes long");
            aMatrix d=aMatrix.newColumnMatrix(this.mod, data);
            var r=vandermonde.multiplyBy(d);
            return r.ToByteArray();
        }//method

    This can process about 1/4 megabytes per second on my i5 U470 @ 1.33GHz. I can make this faster by manually inlining the matrix multiplication:

        int o=0;
        int d=0;
        for (d=0; d<data.Length-numGood; d+=numGood) {
            for (int r=0; r<numGood+numRedundant; r++) {
                Byte value=0;
                for (int c=0; c<numGood; c++) {
                    value=mod.Add(value, mod.Multiply(vandermonde.get(r, c), data[d+c]));
                }//for
                output[r][o]=value;
            }//for
            o++;
        }//for

    This can process about 1 meg a second. (Please note the "mod" is performing operations over GF(2^8) modulo my favorite irreducible polynomial.)

    I know this can get a lot faster: after all, the Vandermonde** matrix is mostly zeros. I should be able to make a routine, or find a routine, that can take my matrix and return an optimized method which will effectively multiply vectors by the given matrix, but faster. Then, when I give this routine a 5x5 Vandermonde matrix (the identity matrix), there is simply no arithmetic to perform, and the original data is just copied.

    ** Please note: when I use the term "Vandermonde", I actually mean an identity matrix with some number of rows from the Vandermonde matrix appended (see comments). This matrix is wonderful because of all the zeros, and because if you remove enough rows (of your choosing) to make it square, it is an invertible matrix. And, of course, I would like to use this same routine to convert any one of those inverted matrices into an optimized series of instructions.

    How can I make this matrix multiplication faster? Thanks!

    (edited to correct my mistake with the Vandermonde matrix)
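
    One hedged sketch of the "compile the matrix" idea, staying close to the types used above (mod, a get(r,c)-style accessor): precompute, per output row, only the non-zero (column, coefficient) pairs, so the hot loop never visits the zeros and an identity row degenerates to a single multiply-add (or a plain copy if coefficient 1 is special-cased).

        struct Term { public int Col; public byte Coef; }

        // Build once per matrix; matrix[r, c] is the GF(2^8) coefficient.
        static Term[][] CompileRows(byte[,] matrix) {
            int rows = matrix.GetLength(0), cols = matrix.GetLength(1);
            var compiled = new Term[rows][];
            for (int r = 0; r < rows; r++) {
                var terms = new List<Term>();
                for (int c = 0; c < cols; c++)
                    if (matrix[r, c] != 0)
                        terms.Add(new Term { Col = c, Coef = matrix[r, c] });
                compiled[r] = terms.ToArray();
            }
            return compiled;
        }

        // Inner-loop replacement: one output byte for one compiled row.
        byte RowTimesVector(Term[] row, byte[] data, int offset) {
            byte value = 0;
            foreach (Term t in row)
                value = mod.Add(value, mod.Multiply(t.Coef, data[offset + t.Col]));
            return value;
        }

    This needs using System.Collections.Generic; and assumes the aMatrix contents can be exported to a byte[,] (or read through get(r, c)) when building the compiled form.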


  • vb.net .aspxauth

    - by Morgan
    I am working with a large site trying to implement web parts for particular users in a particular subdirectory, but I can't get the .ASPXAUTH cookie to be recognized. I've read dozens of tutorials and MS class library pages that tell me how it should work, to no avail. I am brand new to web parts, so I'm sorry if I'm unclear.

    The idea is that logged-in users can travel the site, but when they go to their dashboard they are programmatically authenticated using Membership and FormsAuthentication to pull up their Personalization. When I step through the code, I can see the cookie being set, and that it exists on the following page, but Membership.GetUser() and User.Identity are both empty.

    I know the user exists because I created it programmatically using Membership.CreateUser(), I can see it when I do Membership.GetAllUsers(), and it's online when I use Membership.GetUser(username), but the Personalization doesn't work. Right now, I'm just trying to get the proof of concept going. I've tried creating the ticket and cookie myself, and also using SetAuthCookie() (code follows). I really just need a clue as to what to look for.

    Here's the "login" page...

        If Membership.ValidateUser(testusername, testpassword) Then ' Works
            FormsAuthentication.SetAuthCookie(testusername, True)
            Response.Redirect("webpartsdemo1.aspx", False)
        End If

    And the next page (webpartsdemo1.aspx):

        Dim cookey As String = ".ASPXAUTH"
        lblContent.Text &= "<br><br>" & Request.Cookies(cookey).Name & " Details"
        lblContent.Text &= "<br>path = " & Request.Cookies(cookey).Path
        lblContent.Text &= "<br>domain = " & Request.Cookies(cookey).Domain
        lblContent.Text &= "<br>expires = " & Request.Cookies(cookey).Expires
        lblContent.Text &= "<br>Secure only? " & Request.Cookies(cookey).Secure
        lblContent.Text &= "<br>HTTP only? = " & Request.Cookies(cookey).HttpOnly
        lblContent.Text &= "<br>Has subkeys? " & Request.Cookies(cookey).HasKeys
        lblContent.Text &= "<br/><br/>request authenticated? " & Request.IsAuthenticated.ToString
        lblContent.Text &= " Getting user<br/>Current User: "

        Dim muGidget As MembershipUser
        If Request.IsAuthenticated Then
            muGidget = Membership.GetUser
            lblContent.Text &= Membership.GetUser().UserName
        Else
            lblContent.Text &= "none found"
        End If

    Output:

        .ASPXAUTH Details
        path = /
        domain =
        expires = 12:00:00 AM
        Secure only? False
        HTTP only? = False
        Has subkeys? False

        request authenticated? False
        Getting user
        Current User: none found

    Sorry to go on so long. Thanks for any help you can provide.
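
    One configuration angle worth ruling out, since none of the web.config is shown here: forms authentication has to be enabled for the application (and not overridden in the subdirectory), and every server/app that reads the ticket needs compatible machineKey settings, or the .ASPXAUTH cookie arrives but can never be decrypted - which looks exactly like "cookie present, Request.IsAuthenticated = False". A minimal sketch of the relevant section (values are placeholders):

        <system.web>
          <authentication mode="Forms">
            <forms name=".ASPXAUTH" loginUrl="login.aspx" timeout="30" path="/" />
          </authentication>
          <!-- Use explicit, matching keys if more than one app or server must read the ticket -->
          <machineKey validationKey="AutoGenerate,IsolateApps"
                      decryptionKey="AutoGenerate,IsolateApps"
                      validation="SHA1" />
        </system.web>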

