Search Results

Search found 39788 results on 1592 pages for 'action method'.


  • Optimizing AES modes on Solaris for Intel Westmere

    - by danx
    Optimizing AES modes on Solaris for Intel Westmere

    Review

    AES is a strong method of symmetric (secret-key) encryption. It is a U.S. FIPS-approved cryptographic algorithm (FIPS 197) that operates on 16-byte blocks. AES has been available since 2001 and is widely used. However, AES by itself has a weakness. AES encryption isn't usually used by itself because identical blocks of plaintext are always encrypted into identical blocks of ciphertext. This encryption can be easily attacked with "dictionaries" of common blocks of text, which allows one to more easily discern the content of the unknown cryptotext. This mode of encryption is called "Electronic Code Book" (ECB), because in theory one can keep a "code book" of all known cryptotext and plaintext results to cipher and decipher AES. In practice, a complete "code book" is not practical, even in electronic form, but large dictionaries of common plaintext blocks are still possible. Here's a diagram of encrypting input data using AES ECB mode:

                 Block 1                          Block 2
             PlainTextInput                   PlainTextInput
                   |                                |
                   \/                               \/
         AESKey-->(AES Encryption)      AESKey-->(AES Encryption)
                   |                                |
                   \/                               \/
            CipherTextOutput                 CipherTextOutput
                 Block 1                          Block 2

    What's the solution to the same cleartext input producing the same ciphertext output? The solution is to further process the encrypted or decrypted text in such a way that the same text produces different output. This usually involves an Initialization Vector (IV) and XORing the decrypted or encrypted text. As an example, I'll illustrate CBC mode encryption:

                 Block 1                          Block 2
             PlainTextInput                   PlainTextInput
                   |                                |
                   \/                               \/
         IV >---->(XOR)          +---------------->(XOR)          +---> . . . .
                   |             |                  |             |
                   \/            |                  \/            |
         AESKey-->(AES Encryption)      AESKey-->(AES Encryption) |
                   |             |                  |             |
                   \/            |                  \/            |
            CipherTextOutput ----+           CipherTextOutput ----+
                 Block 1                          Block 2

    The steps for CBC encryption are:
    1. Start with a 16-byte Initialization Vector (IV), chosen randomly.
    2. XOR the IV with the first block of input plaintext.
    3. Encrypt the result with AES using a user-provided key. The result is the first 16 bytes of output cryptotext.
    4. Use the cryptotext (instead of the IV) of the previous block to XOR with the next input block of plaintext.

    Another mode besides CBC is Counter Mode (CTR). As with CBC mode, it also starts with a 16-byte IV. However, for subsequent blocks, the IV is just incremented by one. Also, the IV is XORed with the AES encryption result (not the plain text input). Here's an illustration:

                 Block 1                          Block 2
             PlainTextInput                   PlainTextInput
                   |                                |
                   \/                               \/
         AESKey-->(AES Encryption)      AESKey-->(AES Encryption)
                   |                                |
                   \/                               \/
         IV >---->(XOR)            IV + 1 >------->(XOR)      IV + 2 ---> . . . .
                   |                                |
                   \/                               \/
            CipherTextOutput                 CipherTextOutput
                 Block 1                          Block 2

    Optimization

    Which of these modes can be parallelized? ECB encryption/decryption can be parallelized because it does nothing more than plain AES encryption and decryption, as mentioned above. CBC encryption can't be parallelized because it depends on the output of the previous block. However, CBC decryption can be parallelized because all the encrypted blocks are known at the beginning. CTR encryption and decryption can be parallelized because the input to each block is known--it's just the IV incremented by one for each subsequent block. So, in summary, for ECB, CBC, and CTR modes, encryption and decryption can be parallelized with the exception of CBC encryption.

    How do we parallelize encryption? By interleaving. Usually when reading and writing data there are pipeline "stalls" (idle processor cycles) that result from waiting for memory to be loaded or stored to or from CPU registers. Since the software is written to encrypt/decrypt the next data block during the cycles where pipeline stalls would usually occur, we can avoid stalls and crypt with fewer cycles. This software processes 4 blocks at a time, which ensures virtually no waiting ("stalling") for reading or writing data in memory.

    Other Optimizations

    Besides interleaving, the other optimizations performed are:
    - Loading the entire key schedule into the 128-bit %xmm registers. This is done once per 4 blocks of data (when 4 blocks are present). The entire "key schedule" (the user input key preprocessed for encryption and decryption) is loaded; this takes 11, 13, or 15 registers for AES-128, AES-192, and AES-256, respectively. The input data is loaded into another %xmm register, and the same register contains the output result after encrypting/decrypting.
    - Using other SSE/SSSE instructions alongside AESNI. Besides the aesenc, aesenclast, aesdec, aesdeclast, aeskeygenassist, and aesimc AESNI instructions, Intel has several other instructions that operate on the 128-bit %xmm registers. Some common instructions for encryption are: pxor (exclusive or, very useful), movdqu (load/store a %xmm register from/to memory), pshufb (shuffle bytes for byte swapping), and pclmulqdq (carry-less multiply for GCM mode).
    - Combining AES encryption/decryption with CBC or CTR modes processing. Instead of loading the input data twice (once for AES encryption/decryption, and again for modes processing--CTR or CBC, for example), the input data is loaded once, and both the AES and modes operations occur in the same function.

    Performance

    Everyone likes pretty color charts, so here they are. I ran these on Solaris 11 running on a Piketon Platform system with a 4-core Intel Clarkdale processor @3.20GHz. Clarkdale is part of the Westmere processor architecture family. The "before" case is Solaris 11, unmodified. Keep in mind that the "before" case has already been optimized with hand-coded Intel AESNI assembly. The "after" case has combined AES-NI and mode instructions, interleaved 4 blocks at a time.

    For the first table, lower is better (milliseconds). The first table shows the performance improvement using the Solaris encrypt(1) and decrypt(1) CLI commands. I encrypted and decrypted a 1/2 GByte file on /tmp (swap tmpfs). Encryption improved by about 40% and decryption improved by about 80%. AES-128 is slightly faster than AES-256, as expected.

    The second table shows more detailed timings for CBC, CTR, and ECB modes for the 3 AES key sizes and different data lengths. The results shown are the percentage improvement as shown by an internal PKCS#11 microbenchmark. And keep in mind the previous baseline code already had optimized AESNI assembly! The key size (AES-128, 192, or 256) makes little difference in relative percentage improvement (although, of course, AES-128 is faster than AES-256). Larger data sizes show better improvement than 128-byte data.

    Availability

    This software is in Solaris 11 FCS. It is available in the 64-bit libcrypto library and the "aes" Solaris kernel module. You must be running hardware that supports AESNI (for example, the Intel Westmere and Sandy Bridge microprocessor architectures). The easiest way to determine if AES-NI is available is with the isainfo(1) command. For example:

        $ isainfo -v
        64-bit amd64 applications
                pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2
                sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2
                sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this software. The Solaris libraries and kernel automatically determine whether they are running on an AESNI-capable machine and execute the correctly-tuned software for the current microprocessor.

    Summary

    Maximum throughput of AES cipher modes can be achieved by combining AES encryption with modes processing, interleaving encryption of 4 blocks at a time, and using Intel's wide 128-bit %xmm registers and instructions.

    References

    - "Block cipher modes of operation", Wikipedia -- good overview of AES modes (ECB, CBC, CTR, etc.)
    - "Advanced Encryption Standard", Wikipedia
    - "Current Modes", NIST -- describes NIST-approved block cipher modes (ECB, CBC, CFB, OFB, CCM, GCM)
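    To make the mode differences above concrete, here is a minimal Python sketch, assuming the third-party "cryptography" package is installed. It only illustrates ECB's identical-block weakness and CBC chaining versus CTR's independent counter blocks; it is not the Solaris AESNI implementation described in this post.

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        key = os.urandom(16)                  # AES-128 key
        iv = os.urandom(16)                   # 16-byte IV / initial counter block
        plaintext = b"sixteen byte blk" * 2   # two identical 16-byte plaintext blocks

        def encrypt(mode):
            enc = Cipher(algorithms.AES(key), mode).encryptor()
            return enc.update(plaintext) + enc.finalize()

        ecb = encrypt(modes.ECB())            # identical blocks -> identical ciphertext
        cbc = encrypt(modes.CBC(iv))          # block N is chained to ciphertext N-1, so
                                              # CBC encryption cannot be parallelized
        ctr = encrypt(modes.CTR(iv))          # block N only needs IV+N, known up front,
                                              # so CTR encryption/decryption parallelizes

        print(ecb[:16] == ecb[16:32])         # True  -- the ECB "code book" weakness
        print(cbc[:16] == cbc[16:32])         # False
        print(ctr[:16] == ctr[16:32])         # False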

    Read the article

  • Best way to handle multiple tables to replace one big table in Rails? (e.g. 'Books1', 'Books2', etc.)

    - by mikep
    Hello, I've decided to use multiple tables for an entity (e.g. Books1, Books2, Books3, etc.), instead of just one main table which could end up having a lot of rows (e.g. just Books). I'm doing this to try to avoid a potential future performance drop that could come with having too many rows in one table. With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations. (I know that I could use a partition for this, but, for now, I've decided to go the 'multiple tables' route.) Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. I'm going to split the adds across the tables. The goal is to try and keep each table pretty much even -- but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:

        class User < ActiveRecord::Base
          has_many :books1
          has_many :books2
          has_many :books3
          has_many :books4
          has_many :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end

        class Books3 < ActiveRecord::Base
          belongs_to :user
        end

    I'm assuming that the main performance hit would come in terms of memory and possibly some method call overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice, dynamic model accessor methods like User.find_by_. But for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time, and any other has_many :bookX association that was loaded would be a waste. For example, with a user.id of 2, I'd only need books3.find_by_author('Author'), but the way I'm thinking of setting this up, I'd still have access to Books1..n. I don't really know what Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this. So, a few questions:

    1) Is there some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any 'best practices' for this?
    2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this?
    3) Does anyone have any advice on how to abstract the fact that there's multiple book tables behind a single books model/class? For example, so I can call books.find_by_author('Author') instead of books3.find_by_author('Author').

    Thank you!
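    Purely as an illustration of the scheme being described (sketched in Python rather than Ruby, with hypothetical names and assuming a DB-API-style connection such as sqlite3), routing every lookup through the table recorded on the user keeps the "which Books table?" decision in one place:

        # Hypothetical sketch: each user is pinned to one of N book tables when
        # created, and every query is routed through that stored choice.
        NUM_BOOK_TABLES = 5

        def assign_book_table(user_id):
            # Chosen once at user-creation time and stored on the user record.
            return "books%d" % (user_id % NUM_BOOK_TABLES + 1)

        def find_books_by_author(db, user, author):
            table = user["book_table"]   # e.g. "books3"; only this table is touched
            # Table names come from our own fixed list above, never from user input.
            sql = "SELECT * FROM %s WHERE user_id = ? AND author = ?" % table
            return db.execute(sql, (user["id"], author)).fetchall()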

    Read the article

  • RIA Services Repository Save does not work!?

    - by Savvas Sopiadis
    Hello everybody! Doing my first SL4 MVVM RIA-based application, I ran into the following situation: updating a record (EF4, NO-POCOs!!) in the SL client seems to take place, but the values in the DBMS are unchanged. Debugging with Fiddler, the message on save is (amongst others):

        EntityActions.nil? b9http://schemas.microsoft.com/2003/10/Serialization/Arrays^HasMemberChanges?^Id?^ Operation?Update

    I assume that this says only: hey! the DBMS should do an update on this record, AND nothing more! Is that right?! I'm using a generic repository like this:

        public class Repository<T> : IRepository<T> where T : class
        {
            IObjectSet<T> _objectSet;
            IObjectContext _objectContext;

            public Repository(IObjectContext objectContext)
            {
                this._objectContext = objectContext;
                _objectSet = objectContext.CreateObjectSet<T>();
            }

            public IQueryable<T> AsQueryable() { return _objectSet; }
            public IEnumerable<T> GetAll() { return _objectSet.ToList(); }
            public IEnumerable<T> Find(Expression<Func<T, bool>> where) { return _objectSet.Where(where); }
            public T Single(Expression<Func<T, bool>> where) { return _objectSet.Single(where); }
            public T First(Expression<Func<T, bool>> where) { return _objectSet.First(where); }
            public void Delete(T entity) { _objectSet.DeleteObject(entity); }
            public void Add(T entity) { _objectSet.AddObject(entity); }
            public void Attach(T entity) { _objectSet.Attach(entity); }
            public void Save() { _objectContext.SaveChanges(); }
        }

    The DomainService Update method is the following:

        [Update]
        public void UpdateCulture(Culture currentCulture)
        {
            if (currentCulture.EntityState == System.Data.EntityState.Detached)
            {
                this.cultureRepository.Attach(currentCulture);
            }
            this.cultureRepository.Save();
        }

    I know that the currentCulture entity is detached. What confuses me (amongst other things) is this: is the _objectContext still alive? (Which means it "will be"??? aware of the changes made to the record, so simply calling Attach() and then Save() should be enough!?!?) What am I missing? Development environment: VS2010 RC - Entity Framework 4 (no POCOs). Thanks in advance

    Read the article

  • C# Confusing Results from Performance Test

    - by aip.cd.aish
    I am currently working on an image processing application. The application captures images from a webcam and then does some processing on it. The app needs to be real-time responsive (ideally < 50ms to process each request). I have been doing some timing tests on the code I have and I found something very interesting (see below).

        clearLog();
        log("Log cleared");
        camera.QueryFrame();
        camera.QueryFrame();
        log("Camera buffer cleared");
        Sensor s = t.val;
        log("Sx: " + S.X + " Sy: " + S.Y);
        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acuired for processing");

    Each time the log is called, the time since the beginning of the processing is displayed. Here is my log output:

        [3 ms]Log cleared
        [41 ms]Camera buffer cleared
        [41 ms]Sx: 589 Sy: 414
        [112 ms]Camera output acuired for processing

    The timings are computed using a StopWatch from System.Diagnostics.

    QUESTION 1
    I find this slightly interesting, since when the same method is called twice it executes in ~40ms, and when it is called once the next time it takes longer (~70ms). Assigning the value can't really be taking that long, right?

    QUESTION 2
    Also, the timing for each step recorded above varies from time to time. The values for some steps are sometimes as low as 0ms and sometimes as high as 100ms, though most of the numbers seem to be relatively consistent. I guess this may be because the CPU was used by some other process in the meantime? (If this is for some other reason, please let me know.) Is there some way to ensure that when this function runs, it gets the highest priority? So that the speed test results will be consistently low (in terms of time).

    EDIT
    I changed the code to remove the two blank query frames from above, so the code is now:

        clearLog();
        log("Log cleared");
        Sensor s = t.val;
        log("Sx: " + S.X + " Sy: " + S.Y);
        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acuired for processing");

    The timing results are now:

        [2 ms]Log cleared
        [3 ms]Sx: 589 Sy: 414
        [5 ms]Camera output acuired for processing

    The next steps now take longer (sometimes the next step jumps to after 20-30ms, while it was previously almost instantaneous). I am guessing this is due to CPU scheduling. Is there some way I can ensure the CPU does not get scheduled to do something else while it is running through this code?
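    As a language-agnostic illustration of the measurement issue (sketched here in Python, with a hypothetical process_frame standing in for the camera and processing calls), discarding warm-up runs and reporting the median of many timings gives far more stable numbers than a single stopwatch reading:

        import statistics
        import time

        def process_frame():
            # Hypothetical stand-in for camera.QueryFrame() plus processing.
            sum(i * i for i in range(50_000))

        # The first call is often slower (driver/JIT/cache warm-up), so run a few
        # unmeasured warm-up iterations first.
        for _ in range(3):
            process_frame()

        samples = []
        for _ in range(50):
            t0 = time.perf_counter()
            process_frame()
            samples.append((time.perf_counter() - t0) * 1000.0)   # milliseconds

        # The median damps out the occasional spike caused by the OS scheduling
        # other work on the CPU mid-measurement.
        print(f"median {statistics.median(samples):.2f} ms, "
              f"min {min(samples):.2f} ms, max {max(samples):.2f} ms")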

    Read the article

  • How to correctly use ABPersonViewController with ABPeoplePickerNavigationController to view Contact

    - by Maha
    I'm attempting to add a feature to my app that allows the user to select a contact from an ABPeoplePickerNavigationController, which then displays an ABPersonViewController corresponding to the contact they picked. At that point, I want the user to be able to click on a contact's phone number and have my app respond with custom behavior. I've got the ABPeoplePickerNavigationController working fine, but I'm running into a problem displaying the ABPersonViewController. I can get the ABPersonViewController to animate onto the screen just fine, but it only displays the contact's photo, name, and company name. None of the contact's other fields are displayed. I'm using the 'displayedProperties' element in the ABPersonViewController to tell the program to display phone numbers. This creates some strange behavior; when I select a contact that has no phone numbers assigned, the contact shows up with "No Phone Numbers" written in the background (as you'd expect), but when selecting a contact that does have a phone number, all I get is a blank contact page (without the "No Phone Numbers" text).

    Here's the method in my ABPeoplePickerNavigationController delegate class that I'm using to create my PersonViewController class, which implements the ABPersonViewController interface:

        - (BOOL)peoplePickerNavigationController:(ABPeoplePickerNavigationController *)peoplePicker
              shouldContinueAfterSelectingPerson:(ABRecordRef)person {
            BOOL returnState = NO;
            PersonViewController *personView = [[PersonViewController alloc] init];
            [personView displayContactInfo:person];
            [peoplePicker pushViewController:personView animated:YES];
            [personView release];
            return returnState;
        }

    Here's my PersonViewController.h header file:

        #import <UIKit/UIKit.h>
        #import <Foundation/Foundation.h>
        #import <AddressBookUI/AddressBookUI.h>

        @interface PersonViewController : UIViewController <ABPersonViewControllerDelegate> {
        }
        - (void)displayContactInfo:(ABRecordRef)person;
        @end

    Finally, here's my PersonViewController.m that's creating the ABPersonViewController to view the selected contact:

        #import "PersonViewController.h"

        @implementation PersonViewController

        - (void)displayContactInfo:(ABRecordRef)person {
            ABPersonViewController *personController = [[ABPersonViewController alloc] init];
            personController.personViewDelegate = self;
            personController.allowsEditing = NO;
            personController.displayedPerson = person;
            personController.addressBook = ABAddressBookCreate();
            personController.displayedProperties = [NSArray arrayWithObjects:
                [NSNumber numberWithInt:kABPersonPhoneProperty], nil];
            [self setView:personController.view];
            [[self navigationController] pushViewController:personController animated:YES];
            [personController release];
        }

        - (BOOL)personViewController:(ABPersonViewController *)personView
            shouldPerformDefaultActionForPerson:(ABRecordRef)person
                                       property:(ABPropertyID)property
                                     identifier:(ABMultiValueIdentifier)identifierForValue {
            return YES;
        }

        @end

    Does anyone have any idea as to why I'm seeing this blank Contact screen instead of one with clickable phone number fields?

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc. -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc. -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect wasn't related to thread creation time, but rather that placement was a function of process membership.

    At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.

    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more.

    To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code.

    Remarks:
    - It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry. The scheduler is work-conserving.
    - The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest-order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.
    - Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. That is, it's an intra-socket policy, not an inter-socket policy.
    - Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to be able to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.
    - If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
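    As a rough, purely illustrative model of the placement policy described above (a toy Python simulation, not Solaris code), packing threads onto the preferred socket until it reaches the 50% threshold reproduces the 20-, 48-, and 64-thread placements mentioned in the post:

        # Toy model of the Solaris 11 policy described above: pack a process's
        # threads onto its preferred socket until that socket hits a 50% load
        # threshold, then pick the next least-loaded socket.
        SOCKETS = 4
        STRANDS_PER_SOCKET = 64               # T4-4: 256 logical CPUs over 4 sockets
        THRESHOLD = STRANDS_PER_SOCKET // 2   # 50%

        def place_threads(nthreads, load=None):
            load = load or [0] * SOCKETS
            preferred = min(range(SOCKETS), key=lambda s: load[s])
            for _ in range(nthreads):
                if load[preferred] >= THRESHOLD:
                    # Preferred socket is saturated; switch to the least-loaded one.
                    preferred = min(range(SOCKETS), key=lambda s: load[s])
                load[preferred] += 1
            return load

        print(place_threads(20))   # [20, 0, 0, 0] -- all on one node, as observed
        print(place_threads(48))   # [32, 16, 0, 0]
        print(place_threads(64))   # [32, 32, 0, 0]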

    Read the article

  • Variables are "lost" somewhere, then reappear... all with no error thrown.

    - by rob - not a robber
    Hi All, I'm probably going to make myself look like a fool with my horrible scripting but here we go. I have a form that I am collecting a bunch of checkbox info from using a binary method. ON/SET=1 !ISSET=0 Anyway, all seems to be going as planned except for the query bit. When I run the script, it runs through and throws no errors, but it's not doing what I think I am telling it to. I've hard coded the values in the query and left them in the script and it DOES update the DB. I've also tried echoing all the needed variables after the script runs and exiting right after so I can audit them... and they're all there. Here's an example.

        ####FEATURES RECORD UPDATE
        ### HERE I DECIDE TO RUN THE SCRIPT BASED ON WHETHER AN IMAGE BUTTON WAS USED
        if (isset($_POST["button_x"])) {

            ### HERE I AM ASSIGNING 1 OR 0 TO A VAR BASED ON WHETHER THE CHECKBOX WAS SET
            if (isset($_POST["pool"])) $pool=1;
            if (!isset($_POST["pool"])) $pool=0;
            if (isset($_POST["darts"])) $darts=1;
            if (!isset($_POST["darts"])) $darts=0;
            if (isset($_POST["karaoke"])) $karaoke=1;
            if (!isset($_POST["karaoke"])) $karaoke=0;
            if (isset($_POST["trivia"])) $trivia=1;
            if (!isset($_POST["trivia"])) $trivia=0;
            if (isset($_POST["wii"])) $wii=1;
            if (!isset($_POST["wii"])) $wii=0;
            if (isset($_POST["guitarhero"])) $guitarhero=1;
            if (!isset($_POST["guitarhero"])) $guitarhero=0;
            if (isset($_POST["megatouch"])) $megatouch=1;
            if (!isset($_POST["megatouch"])) $megatouch=0;
            if (isset($_POST["arcade"])) $arcade=1;
            if (!isset($_POST["arcade"])) $arcade=0;
            if (isset($_POST["jukebox"])) $jukebox=1;
            if (!isset($_POST["jukebox"])) $jukebox=0;
            if (isset($_POST["dancefloor"])) $dancefloor=1;
            if (!isset($_POST["dancefloor"])) $dancefloor=0;

            ### I'VE DONE LOADS OF PERMUTATIONS HERE... HARD SET THE 1/0 VARS AND LEFT THE $estab_id TO BE PICKED UP.
            ### SET THE $estab_id AND LEFT THE COLUMN DATA TO BE PICKED UP. ALL NO GOOD.
            ### IT _DOES_ WORK IF I HARD SET ALL VARS THOUGH
            mysql_query("UPDATE thedatabase SET
                pool_table='$pool', darts='$darts', karoke='$karaoke', trivia='$trivia',
                wii='$wii', megatouch='$megatouch', guitar_hero='$guitarhero',
                arcade_games='$arcade', dancefloor='$dancefloor'
                WHERE establishment_id='22'");

            ###WEIRD THING HERE IS IF I ECHO THE VARS AT THIS POINT AND THEN EXIT(); they all show up as intended.
            header("location:theadminfilething.php");
            exit();
        }

    THANKS ALL!!!
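    As a generic illustration of the ON/SET=1, !ISSET=0 scheme described above (sketched in Python; the field names are taken from the post and the form dictionary is a stand-in for $_POST), the presence test can be collapsed into a single pass:

        # A submitted checkbox appears in the posted form and becomes 1; an absent
        # checkbox becomes 0.
        FEATURES = ["pool", "darts", "karaoke", "trivia", "wii",
                    "guitarhero", "megatouch", "arcade", "jukebox", "dancefloor"]

        def checkbox_flags(posted_form):
            """posted_form stands in for $_POST; only checked boxes appear in it."""
            return {name: int(name in posted_form) for name in FEATURES}

        print(checkbox_flags({"pool": "on", "darts": "on"}))
        # {'pool': 1, 'darts': 1, 'karaoke': 0, 'trivia': 0, ...}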

    Read the article

  • HTML Purifier: Removing an element conditionally based on its attributes

    - by pinkgothic
    As per the HTML Purifier smoketest, 'malformed' URIs are occasionally discarded to leave behind an attribute-less anchor tag, e.g.

        <a href="javascript:document.location='http://www.google.com/'">XSS</a>

    becomes

        <a>XSS</a>

    ...as well as occasionally being stripped down to the protocol, e.g.

        <a href="http://1113982867/">XSS</a>

    becomes

        <a href="http:/">XSS</a>

    While that's unproblematic, per se, it's a bit ugly. Instead of trying to strip these out with regular expressions, I was hoping to use HTML Purifier's own library capabilities / injectors / plug-ins / whathaveyou.

    Point of reference: Handling attributes

    Conditionally removing an attribute in HTMLPurifier is easy. Here the library offers the class HTMLPurifier_AttrTransform with the method confiscateAttr(). While I don't personally use the functionality of confiscateAttr(), I do use an HTMLPurifier_AttrTransform as per this thread to add target="_blank" to all anchors.

        // more configuration stuff up here
        $htmlDef = $htmlPurifierConfiguration->getHTMLDefinition(true);
        $anchor = $htmlDef->addBlankElement('a');
        $anchor->attr_transform_post[] = new HTMLPurifier_AttrTransform_Target();
        // purify down here

    HTMLPurifier_AttrTransform_Target is a very simple class, of course.

        class HTMLPurifier_AttrTransform_Target extends HTMLPurifier_AttrTransform
        {
            public function transform($attr, $config, $context)
            {
                // I could call $this->confiscateAttr() here to throw away an
                // undesired attribute
                $attr['target'] = '_blank';
                return $attr;
            }
        }

    That part works like a charm, naturally.

    Handling elements

    Perhaps I'm not squinting hard enough at HTMLPurifier_TagTransform, or am looking in the wrong place(s), or generally amn't understanding it, but I can't seem to figure out a way to conditionally remove elements. Say, something to the effect of:

        // more configuration stuff up here
        $htmlDef = $htmlPurifierConfiguration->getHTMLDefinition(true);
        $anchor = $htmlDef->addElementHandler('a');
        $anchor->elem_transform_post[] = new HTMLPurifier_ElementTransform_Cull();
        // add target as per 'point of reference' here
        // purify down here

    With the Cull class extending something that has a confiscateElement() ability, or comparable, wherein I could check for a missing href attribute or a href attribute with the content http:/.

    HTMLPurifier_Filter

    I understand I could create a filter, but the examples (Youtube.php and ExtractStyleBlocks.php) suggest I'd be using regular expressions in that, which I'd really rather avoid, if it is at all possible. I'm hoping for an onboard or quasi-onboard solution that makes use of HTML Purifier's excellent parsing capabilities. Returning null in a child-class of HTMLPurifier_AttrTransform unfortunately doesn't cut it. Anyone have any smart ideas, or am I stuck with regexes? :)

    Read the article

  • C# - WebBrowser control seems to cache screenshots

    - by Justin
    Hey, I'm using the WebBrowser control in an ASP.NET MVC 2 app (don't judge, I'm doing it in an admin section only to be used by me), here's the code:

        public static class Screenshot
        {
            private static string _url;
            private static int _width;
            private static byte[] _bytes;

            public static byte[] Get(string url)
            {
                // This method gets a screenshot of the webpage
                // rendered at its full size (height and width)
                return Get(url, 50);
            }

            public static byte[] Get(string url, int width)
            {
                //set properties.
                _url = url;
                _width = width;

                //start screen scraper.
                var webBrowseThread = new Thread(new ThreadStart(TakeScreenshot));
                webBrowseThread.SetApartmentState(ApartmentState.STA);
                webBrowseThread.Start();

                //check every second if it got the screenshot yet.
                //i know, the thread sleep is terrible, but it's the secure section, don't judge...
                int numChecks = 20;
                for (int k = 0; k < numChecks; k++)
                {
                    Thread.Sleep(1000);
                    if (_bytes != null)
                    {
                        return _bytes;
                    }
                }
                return null;
            }

            private static void TakeScreenshot()
            {
                try
                {
                    //load the webpage into a WebBrowser control.
                    using (WebBrowser wb = new WebBrowser())
                    {
                        wb.ScrollBarsEnabled = false;
                        wb.ScriptErrorsSuppressed = true;
                        wb.Navigate(_url);
                        while (wb.ReadyState != WebBrowserReadyState.Complete)
                        {
                            Application.DoEvents();
                        }

                        //set the size of the WebBrowser control.
                        //take Screenshot of the web page's full width.
                        wb.Width = wb.Document.Body.ScrollRectangle.Width;
                        //take Screenshot of the web page's full height.
                        wb.Height = wb.Document.Body.ScrollRectangle.Height;

                        //get a Bitmap representation of the webpage as it's rendered in the WebBrowser control.
                        var bitmap = new Bitmap(wb.Width, wb.Height);
                        wb.DrawToBitmap(bitmap, new Rectangle(0, 0, wb.Width, wb.Height));

                        //resize.
                        var height = _width * (bitmap.Height / bitmap.Width);
                        var thumbnail = bitmap.GetThumbnailImage(_width, height, null, IntPtr.Zero);

                        //convert to byte array.
                        var ms = new MemoryStream();
                        thumbnail.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                        _bytes = ms.ToArray();
                    }
                }
                catch (Exception exc)
                {
                    //TODO: why did screenshot fail?
                    string message = exc.Message;
                }
            }
        }

    This works fine for the first screenshot that I take, however if I try to take subsequent screenshots of different URLs, it saves screenshots of the first URL for the new URL, or sometimes it'll save the screenshot from 3 or 4 URLs ago. I'm creating a new instance of WebBrowser for each screenshot and am disposing of it properly with the "using" block, any idea why it's behaving this way? Thanks, Justin

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows.

    The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow.

    But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained about the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties.

    The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge, I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand.

    Alternatively, if the pile of cards is not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values has been seen before, but this time it needs more information than the mere existence of a new set of values: it needs to consider how many of them there are.

    A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this. In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower.

    This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component.

    And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things.) A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.)

    An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already.

    As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
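    To make the blocking/non-blocking distinction concrete outside of SQL Server, here is a small sketch in Python (reusing the playing-card example) of the two strategies: a streaming count over input already sorted by the grouping key can emit each group as soon as the key changes, while a hash-based count can emit nothing until every row has been seen.

        from itertools import groupby
        from collections import Counter

        def stream_aggregate(sorted_rows):
            """Non-blocking: input must be sorted by the grouping key; each group's
            count is yielded as soon as the key changes (like a Stream Aggregate)."""
            for key, group in groupby(sorted_rows):
                yield key, sum(1 for _ in group)

        def hash_aggregate(rows):
            """Blocking: nothing can be emitted until every row has been seen
            (like a Hash Match aggregate)."""
            counts = Counter(rows)
            yield from counts.items()

        hand = ["Hearts", "Hearts", "Hearts", "Hearts", "Spades", "Spades", "Clubs"]
        print(list(stream_aggregate(hand)))          # sorted hand: groups stream out
        print(list(hash_aggregate(reversed(hand))))  # unsorted: counted only at the end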

    Read the article

  • getView() (for a Custom ListView ) doesn't get called on notifyDatasetChanged()

    - by hungson175
    Hi everyone, I have the following problem, and searched for a while but haven't got any solution from the net: I have a custom list view, and each item has the following layout (I just post the essential):

        <LinearLayout>
            <ImageView android:id="@+id/friendlist_iv_avatar" />
            <TextView android:id="@+id/friendlist_tv_nick_name" />
            <ImageView android:id="@+id/friendlist_iv_status_icon" />
        </LinearLayout>

    And I have a class FriendRowItem, which is inflated from the above layout:

        public class FriendRowItem extends LinearLayout {
            private ImageView ivAvatar;
            private ImageView ivStatusIcon;
            private TextView tvNickName;

            public FriendRowItem(Context context) {
                super(context);
                RelativeLayout friendRow = (RelativeLayout) Helpers.inflate(context, R.layout.friendlist_row);
                this.addView(friendRow);
                ivAvatar = (ImageView) findViewById(R.id.friendlist_iv_avatar);
                ivStatusIcon = (ImageView) findViewById(R.id.friendlist_iv_status_icon);
                tvNickName = (TextView) findViewById(R.id.friendlist_tv_nick_name);
            }

            public void setPropeties(Friend friend) {
                //Avatar
                ivAvatar.setImageResource(friend.getAvatar().getDrawableResourceId());
                //Status
                Status.Type status = friend.getStatusType();
                if (status == Type.ONLINE) {
                    ivStatusIcon.setImageResource(R.drawable.online_icon);
                } else {
                    ivStatusIcon.setImageResource(R.drawable.offline_icon);
                }
                //Nickname
                String name = friend.getChatID();
                if (friend.hasName()) {
                    name = friend.getName();
                }
                tvNickName.setText(name);
            }
        }

    In the main activity, I have a custom listview, lvMainListView, with a custom adapter (whose class extends ArrayAdapter - and of course overrides the method getView); the data set of the adapter is ArrayList<Friend> friends:

        private class FriendRowAdapter extends ArrayAdapter<Friend> {
            public FriendRowAdapter(Context applicationContext, int friendlistRow, ArrayList<Friend> friends) {
                super(applicationContext, friendlistRow, friends);
            }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                Friend friend = getItem(position);
                FriendRowItem row = (FriendRowItem) convertView;
                if (row == null) {
                    row = new FriendRowItem(ShowFriendsList.this.getApplicationContext());
                }
                row.setPropeties(friend);
                return row;
            }
        }

    The problem is when I change the status of a friend from OFFLINE to ONLINE, then call notifyDataSetChanged(), nothing happens: the status icon of that friend doesn't change. I tried debugging, and saw that notifyDataSetChanged() gets called, but the custom getView() is not fired! Can you please tell me, is that normal in Android, or did I do something wrong? (I am using Android 1.5.) Thank you in advance, Son

    Read the article

  • Eclipse: How to convert a web project into an AspectJ project and weave and run it using the AJDT pl

    - by Kent
    What I want to do: I want to use the @Configured annotation with Spring. It requires AspectJ to be enabled. I thought that using the AJDT plugin for compile-time weaving would solve this problem. Before installing the plugin, the dependencies which were supposed to be injected into my @Configured object remained null.

    What I have done:
    - Installed the AJDT (AspectJ Development Tools) plugin for Eclipse 3.4.
    - Right-clicked on my web project and converted it into an AspectJ project.
    - Enabled compile-time weaving.

    What doesn't work: When I start the Tomcat 6 server now, I get an exception*.

    Other information:
    - I haven't configured anything in the AspectJ Build and AspectJ Compiler parts of the project properties.
    - JDT Weaving under Preferences says weaving is enabled.
    - I still have Java build path and Java Compiler under project properties, and they look like I previously configured them (while the above two new entries are not configured).
    - The icon of my @Configured object file looks like any other file (i.e. no indication of any aspect or such, which I think there should be). The file name is MailNotification.java (and not .aj), but I guess it should still work as I'm using a Spring annotation for AspectJ?
    - I haven't found any tutorial or similar which teaches how to turn a Spring web application project into an AspectJ project and weave aspects into the files using the AJDT plugin, all within Eclipse 3.4. If there is anything like that out there I would be very interested in knowing about it.

    What I would like to know: Where to go from here? I just want to use the @Configured annotation of Spring. I'm also using @Transactional, which I think also needs AspectJ. If it is possible I would like to study AspectJ as little as possible as long as my needs are met. The subject seems interesting, but huge; all I want to do is use the above two mentioned Spring annotations.

    * Exception when Tomcat 6 is started:

        Caused by: java.lang.IllegalStateException: ClassLoader [org.apache.catalina.loader.WebappClassLoader] does NOT
          provide an 'addTransformer(ClassFileTransformer)' method. Specify a custom LoadTimeWeaver or start your Java
          virtual machine with Spring's agent: -javaagent:spring-agent.jar
            at org.springframework.context.weaving.DefaultContextLoadTimeWeaver.setBeanClassLoader(DefaultContextLoadTimeWeaver.java:82)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1322)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473)
            ... 41 more

    Read the article

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP). Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities, which target insecure dynamic connections between web applications and databases. The attack called Connection String Parameter Pollution (CSPP) exploits specifically the semicolon delimited database connection strings that are constructed dynamically based on the user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact). In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attacks. In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases, databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings from Oracle Data Provider for .Net/ODP.Net: Oracle Data Provider for .Net / ODP.Net; Manufacturer: Oracle; Type: .NET Framework Class Library: - Using TNS Data Source = orcl; User ID = myUsername; Password = myPassword; - Using integrated security Data Source = orcl; Integrated Security = SSPI; - Using the Easy Connect Naming Method Data Source = username/password@//myserver:1521/my.server.com - Specifying Pooling parameters Data Source=myOracleDB; User Id=myUsername; Password=myPassword; Min Pool Size=10; Connection Lifetime=120; Connection Timeout=60; Incr Pool Size=5; Decr Pool Size=2; There are many variations of the connection strings, but the majority of connection strings are key value pairs delimited by semicolons. Attacks on connection strings are not new (see for example, this SANS White Paper on Securing SQL Connection String). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build connection strings based on user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder class tools to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities because they are not aware of the danger posed by this kind of attacks. 
So how are Connection String Parameter Pollution (CSPP) attacks different from traditional Connection String Injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique which typically involves appending repeated parameters to request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated as a result of a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni delivered at the 2009 Appsec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name. When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm: if a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password, and a connection string is generated to connect to the back end database:
Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX;
In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes:
Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX; Integrated Security = true;
Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication. CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, system accounts, etc. Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the Injection category of attacks, like Cross Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user input) are treated as data rather than as characters that are meaningful to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork for protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
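As a minimal sketch of the builder-based defense (shown here with the generic System.Data.Common.DbConnectionStringBuilder; the ODP.NET OracleConnectionStringBuilder mentioned above works along the same lines, and the keys and values are invented for illustration):

using System;
using System.Data.Common;

class ConnectionStringBuilderDemo
{
    static void Main()
    {
        var builder = new DbConnectionStringBuilder();
        builder["Data Source"] = "myDataSource";
        builder["Initial Catalog"] = "db";
        builder["Integrated Security"] = "no";
        builder["User ID"] = "myUsername";
        // Attacker-controlled input is assigned as a value, never spliced into raw text.
        builder["Password"] = "xxx; Integrated Security = true";

        // The builder quotes the value, so the injected text stays inside the password
        // instead of becoming a second "Integrated Security" key/value pair.
        Console.WriteLine(builder.ConnectionString);
    }
}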
For More Information:
- The OracleConnectionStringBuilder is located at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm
- Oracle has developed a publicly available course on preventing SQL Injections. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm
- The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

    Read the article

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way: I have an Articles table: CREATE TABLE [dbo].[Articles]( [server_ref] [int] NOT NULL, [article_ref] [int] NOT NULL, [article_title] [varchar](400) NOT NULL, [category_ref] [int] NOT NULL, [size] [bigint] NOT NULL ) Data (comma delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing: Indexes are disabled on the Articles table. For each dumped text file Data is BULK copied to a temporary table. Temporary table is updated. Old data for the server is dropped from the Articles table. Temporary table data is copied to Articles table. Temporary table dropped. Once this process is complete for all servers the indexes are built and the new database is copied to a web server. I am reasonably happy with this process but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better. i.e. SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%' has been satisfactory but I want to improve the speed of searching. Obviously the "LIKE" is my problem here. Suggestions? SELECT * FROM Articles WHERE article_title LIKE '%criteria%' is horrendous. Partitioning is a feature of SQL Server Enterprise but $$$ which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal but it works. I am putting thought into changing the hardware structure of the system. The thought process of having an import server and a web server is so that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying across the network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness by giving it a big shake up. UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.

    Read the article

  • Storing a set of criteria in another table

    - by bendataclear
    I have a large table with sales data, useful data below: RowID Date Customer Salesperson Product_Type Manufacturer Quantity Value 1 01-06-2004 James Ian Taps Tap Ltd 200 £850 2 02-06-2004 Apple Fran Hats Hats Inc 30 £350 3 04-06-2004 James Lawrence Pencils ABC Ltd 2000 £980 ... Many rows later... ... 185352 03-09-2012 Apple Ian Washers Tap Ltd 600 £80 I need to calculate a large set of targets from table containing values different types, target table is under my control and so far is like: TargetID Year Month Salesperson Target_Type Quantity 1 2012 7 Ian 1 6000 2 2012 8 James 2 2000 3 2012 9 Ian 2 6500 At present I am working out target types using a view of the first table which has a lot of extra columns: SELECT YEAR(Date) , MONTH(Date) , Salesperson , Quantity , CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats' THEN True ELSE False END AS IsType1 , CASE WHEN Manufacturer = 'Hats Inc' AND Product_Type IN ('Hats','Coats') THEN True ELSE False END AS IsType2 ... ... , CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats' THEN True ELSE False END AS IsType24 , CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats' THEN True ELSE False END AS IsType25 FROM SalesTable WHERE [some stuff here] This is horrible to read/debug and I hate it!! I've tried a few different ways of simplifying this but have been unable to get it to work. The closest I have come is to have a third table holding the definition of the types with the values for each field and the type number, this can be joined to the tables to give me the full values but I can't work out a way to cope with multiple values for each field. Finally the question: Is there a standard way this can be done or an easier/neater method other than one column for each type of target? I know this is a complex problem so if anything is unclear please let me know. Edit - What I need to get: At the very end of the process I need to have targets displayed with actual sales: Type Year Month Salesperson TargetQty ActualQty 2 2012 8 James 2000 2809 2 2012 9 Ian 6500 6251 Each row of the sales table could potentially satisfy 8 of the types. Some more points: I have 5 different columns that need to be defined against the targets (or set to NULL to include any value) I have between 30 and 40 different types that need to be defined, several of the columns could contain as many as 10 different values For point 2, if I am using a row for each permutation of values, 2 columns with 10 values each would give me 100 rows for each sales person for each month which is a lot but if this is the only way to define multiple values I will have to do this. Sorry if this makes no sense!

    Read the article

  • BindException with INTERNET permission requested

    - by Mondain
    I have seen several questions regarding SocketException when using Android, but none of them cover the BindException that I get even with the INTERNET permission specified in my manifest. Here is part of my manifest: <uses-permission android:name="android.permission.INTERNET"></uses-permission> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"></uses-permission> <uses-permission android:name="android.permission.ACCESS_WIFI_STATE"></uses-permission> <uses-permission android:name="android.permission.READ_OWNER_DATA"></uses-permission> <uses-permission android:name="android.permission.READ_PHONE_STATE"></uses-permission> <uses-permission android:name="android.permission.ACCOUNT_MANAGER"></uses-permission> <uses-permission android:name="android.permission.AUTHENTICATE_ACCOUNTS"></uses-permission> Here is the relevant portion of my LogCat output: 04-22 14:49:06.117: DEBUG/MyLibrary(4844): Address to bind: 192.168.1.14 port: 843 04-22 14:49:06.197: WARN/System.err(4844): java.net.BindException: Permission denied (maybe missing INTERNET permission) 04-22 14:49:06.207: WARN/System.err(4844): at org.apache.harmony.luni.platform.OSNetworkSystem.socketBindImpl(Native Method) 04-22 14:49:06.207: WARN/System.err(4844): at org.apache.harmony.luni.platform.OSNetworkSystem.bind(OSNetworkSystem.java:107) 04-22 14:49:06.217: WARN/System.err(4844): at org.apache.harmony.luni.net.PlainSocketImpl.bind(PlainSocketImpl.java:184) 04-22 14:49:06.217: WARN/System.err(4844): at java.net.ServerSocket.bind(ServerSocket.java:414) 04-22 14:49:06.227: WARN/System.err(4844): at org.apache.harmony.nio.internal.ServerSocketChannelImpl$ServerSocketAdapter.bind(ServerSocketChannelImpl.java:213) 04-22 14:49:06.227: WARN/System.err(4844): at java.net.ServerSocket.bind(ServerSocket.java:367) 04-22 14:49:06.237: WARN/System.err(4844): at org.apache.harmony.nio.internal.ServerSocketChannelImpl$ServerSocketAdapter.bind(ServerSocketChannelImpl.java:283) 04-22 14:49:06.237: WARN/System.err(4844): at mylibrary.net.PolicyConnection$PolicyServerWorker.(PolicyConnection.java:201) I Really hope this is a simple problem and not something complicated by the fact that the binding is occurring within a worker thread on a port less than 1024. Update Looks as if this is a privileged port issue, anyone know how to bind to ports lower than 1024 in Android? SelectorProvider provider = SelectorProvider.provider(); try { ServerSocketChannel channel = provider.openServerSocketChannel(); policySocket = channel.socket(); Log.d("MyLibrary", "Address to bind: " + device.getAddress().getAddress() + " port: 843"); InetSocketAddress addr = new InetSocketAddress(InetAddress.getByName(device.getAddress().getAddress()), 843); policySocket.bind(addr); policySocket.setReuseAddress(true); policySocket.setReceiveBufferSize(256); } catch (Exception e) { e.printStackTrace(); }

    Read the article

  • SSIS: Deploying OLAP cubes using C# script tasks and AMO

    - by DrJohn
    As part of the continuing series on Building dynamic OLAP data marts on-the-fly, this blog entry will focus on how to automate the deployment of OLAP cubes using SQL Server Integration Services (SSIS) and Analysis Services Management Objects (AMO). OLAP cube deployment is usually done using the Analysis Services Deployment Wizard. However, this option was dismissed for a variety of reasons. Firstly, invoking external processes from SSIS is fraught with problems as (a) it is not always possible to ensure SSIS waits for the external program to terminate; (b) we cannot log the outcome properly and (c) it is not always possible to control the server's configuration to ensure the executable works correctly. Another reason for rejecting the Deployment Wizard is that it requires the 'answers' to be written into four XML files. These XML files record the three things we need to change: the name of the server, the name of the OLAP database and the connection string to the data mart. Although it would be reasonably straight forward to change the content of the XML files programmatically, this adds another set of complication and level of obscurity to the overall process. When I first investigated the possibility of using C# to deploy a cube, I was surprised to find that there are no other blog entries about the topic. I can only assume everyone else is happy with the Deployment Wizard! SSIS "forgets" assembly references If you build your script task from scratch, you will have to remember how to overcome one of the major annoyances of working with SSIS script tasks: the forgetful nature of SSIS when it comes to assembly references. Basically, you can go through the process of adding an assembly reference using the Add Reference dialog, but when you close the script window, SSIS "forgets" the assembly reference so the script will not compile. After repeating the operation several times, you will find that SSIS only remembers the assembly reference when you specifically press the Save All icon in the script window. This problem is not unique to the AMO assembly and has certainly been a "feature" since SQL Server 2005, so I am not amazed it is still present in SQL Server 2008 R2! Sample Package So let's take a look at the sample SSIS package I have provided which can be downloaded from here: DeployOlapCubeExample.zip  Below is a screenshot after a successful run. Connection Managers The package has three connection managers: AsDatabaseDefinitionFile is a file connection manager pointing to the .asdatabase file you wish to deploy. Note that this can be found in the bin directory of you OLAP database project once you have clicked the "Build" button in Visual Studio TargetOlapServerCS is an Analysis Services connection manager which identifies both the deployment server and the target database name. SourceDataMart is an OLEDB connection manager pointing to the data mart which is to act as the source of data for your cube. This will be used to replace the connection string found in your .asdatabase file Once you have configured the connection managers, the sample should run and deploy your OLAP database in a few seconds. Of course, in a production environment, these connection managers would be associated with package configurations or set at runtime. When you run the sample, you should see that the script logs its activity to the output screen (see screenshot above). If you configure logging for the package, then these messages will also appear in your SSIS logging. 
Sample Code Walkthrough Next let's walk through the code. The first step is to parse the connection string provided by the TargetOlapServerCS connection manager and obtain the name of both the target OLAP server and also the name of the OLAP database. Note that the target database does not have to exist to be referenced in an AS connection manager, so I am using this as a convenient way to define both properties. We now connect to the server and check for the existence of the OLAP database. If it exists, we drop the database so we can re-deploy. svr.Connect(olapServerName); if (svr.Connected) { // Drop the OLAP database if it already exists Database db = svr.Databases.FindByName(olapDatabaseName); if (db != null) { db.Drop(); } // rest of script } Next we start building the XMLA command that will actually perform the deployment. Basically this is a small chunk of XML which we need to wrap around the large .asdatabase file generated by the Visual Studio build process. // Start generating the main part of the XMLA command XmlDocument xmlaCommand = new XmlDocument(); xmlaCommand.LoadXml(string.Format("<Batch Transaction='false' xmlns='http://schemas.microsoft.com/analysisservices/2003/engine'><Alter AllowCreate='true' ObjectExpansion='ExpandFull'><Object><DatabaseID>{0}</DatabaseID></Object><ObjectDefinition/></Alter></Batch>", olapDatabaseName));  Next we need to merge two XML files, which we can do by simply setting the InnerXml property of the ObjectDefinition node as follows: // load OLAP Database definition from .asdatabase file identified by connection manager XmlDocument olapCubeDef = new XmlDocument(); olapCubeDef.Load(Dts.Connections["AsDatabaseDefinitionFile"].ConnectionString); // merge the two XML files by obtaining a reference to the ObjectDefinition node oaRootNode.InnerXml = olapCubeDef.InnerXml;   One hurdle I had to overcome was removing detritus from the .asdatabase file left by the Visual Studio build. Through an iterative process, I found I needed to remove several nodes as they caused the deployment to fail. The XMLA error message read "Cannot set read-only node: CreatedTimestamp" or similar. In comparing the XMLA generated by the Deployment Wizard with that generated by my code, these read-only nodes were missing, so clearly I just needed to strip them out. This was easily achieved using XPath to find the relevant XML nodes, of which I show one example below: foreach (XmlNode node in rootNode.SelectNodes("//ns1:CreatedTimestamp", nsManager)) { node.ParentNode.RemoveChild(node); } Now we need to change the database name in both the ID and Name nodes using code such as: XmlNode databaseID = xmlaCommand.SelectSingleNode("//ns1:Database/ns1:ID", nsManager); if (databaseID != null) databaseID.InnerText = olapDatabaseName; Finally we need to change the connection string to point at the relevant data mart. Again this is easily achieved using XPath to search for the relevant nodes and then replace the content of the node with the new name or connection string. XmlNode connectionStringNode = xmlaCommand.SelectSingleNode("//ns1:DataSources/ns1:DataSource/ns1:ConnectionString", nsManager); if (connectionStringNode != null) { connectionStringNode.InnerText = Dts.Connections["SourceDataMart"].ConnectionString; } Finally we need to perform the deployment using the Execute XMLA command and check the returned XmlaResultCollection for errors before setting the Dts.TaskResult. 
XmlaResultCollection oResults = svr.Execute(xmlaCommand.InnerXml);  // check for errors during deployment foreach (Microsoft.AnalysisServices.XmlaResult oResult in oResults) { foreach (Microsoft.AnalysisServices.XmlaMessage oMessage in oResult.Messages) { if ((oMessage.GetType().Name == "XmlaError")) { FireError(oMessage.Description); HadError = true; } } } If you are not familiar with XML programming, all this may seem a bit daunting, but persevere as the sample code is pretty short. If you would like the script to process the OLAP database, simply uncomment the lines in the vicinity of the Process method. Of course, you can extend the script to perform your own custom processing and to even synchronize the database to a front-end server. Personally, I like to keep the deployment and processing separate as the code can become overly complex for support staff. If you want to know more, come see my session at the forthcoming SQLBits conference.
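As a rough sketch of the parsing step described at the start of the walkthrough (the connection manager names come from the sample package; the use of DbConnectionStringBuilder and the assumption that the AS connection string contains the usual Data Source and Initial Catalog keys are mine, not the author's):

// Inside the SSIS script task, where the Dts object is available.
var keys = new System.Data.Common.DbConnectionStringBuilder();
keys.ConnectionString = Dts.Connections["TargetOlapServerCS"].ConnectionString;

// Target OLAP server and database names, as defined on the connection manager.
string olapServerName = keys["Data Source"].ToString();
string olapDatabaseName = keys["Initial Catalog"].ToString();

// Connect with AMO before the drop/re-deploy logic shown earlier.
Microsoft.AnalysisServices.Server svr = new Microsoft.AnalysisServices.Server();
svr.Connect(olapServerName);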

    Read the article

  • jquery ajax, array and json

    - by sea_1987
    I am trying to log some input values into an array via jquery and then use those to run a method server-side and get the data returned as JSON. The HTML looks like this: <div class="segment"> <div class="label"> <label>Choose region: </label> </div> <div class="column w190"> <div class="segment"> <div class="input"> <input type="checkbox" class="radio" value="Y" name="area[Nationwide]" id="inp_Nationwide"> </div> <div class="label "> <label for="inp_Nationwide">Nationwide</label> </div> <div class="s">&nbsp;</div> </div> </div> <div class="column w190"> <div class="segment"> <div class="input"> <input type="checkbox" class="radio" value="Y" name="area[Lancashire]" id="inp_Lancashire"> </div> <div class="label "> <label for="inp_Lancashire">Lancashire</label> </div> <div class="s">&nbsp;</div> </div> </div> <div class="column w190"> <div class="segment"> <div class="input"> <input type="checkbox" class="radio" value="Y" name="area[West_Yorkshire]" id="inp_West_Yorkshire"> </div> <div class="label "> <label for="inp_West_Yorkshire">West Yorkshire</label> </div> <div class="s">&nbsp;</div> </div> <div class="s">&nbsp;</div> </div> I have this javascript to detect whether the items are checked or not: if($('input.radio:checked')){ } What I don't know is how to get the values of the input into an array so I can then send the information through AJAX to my controller. Can anyone help me?

    Read the article

  • rails paperclip unable to access image from another view

    - by curiousCoder
    My app has a HABTM relation between listings and categories. Now, from the categories index page, a user filters a select box to view listings in the show page. Now I am not able to access images attached to listings in the category show page. listing.rb attr_accessible :placeholder, :categories_ids has_and_belongs_to_many :categories has_attached_file :placeholder, :styles => { :medium => "300x300>", :thumb => "100x100>" }, :default_url => "/images/:style/missing.png", :url => "/system/:hash.:extension", :hash_secret => "longSecretString" categories controller def index @categories = Category.all end def show @categories = Category.find_by_sql ["select distinct l.* from listings l , categories c, categories_listings cl where c.id = cl.category_id and l.id = cl.listing_id and c.id in (?,?)" , params[:c][:id1] , params[:c][:id2]] end The SQL just filters and displays the listings in the show page, where I can show their attributes but can't access the placeholder. Note the plural @categories in the categories show page: <ul> <% @categories.each_with_index do |c, index| %> <% if index == 0 %> <li class="first"><%= c.place %></li> <%= image_tag c.placeholder.url(:thumb) %> <li><%= c.price %></li> <% else %> <li><%= c.place %></li> <li><%= c.price %></li> <%= image_tag c.placeholder.url(:thumb) %> <% end %> <% end %> </ul> The question "Access image from different view in a view with paperclip gem ruby on rails" said to make the object plural and loop over it, which should allow access to the image, but it does not work in this case: undefined method `placeholder' for #<Category:0x5c78640> But the amazing thing is that placeholder will be displayed as an array of all images for all the listings if used as suggested in that Stack Overflow answer, which is, obviously, not the way I prefer. Where's the issue? What am I missing?

    Read the article

  • read user Input of custom dialogs

    - by urobo
    I built a custom dialog starting from an AlertDialog to obtain login information from a user. So the dialog contains two EditText fields, using the layoutinflater service I obtain the layout and I'm saving a reference to the fields. LayoutInflater inflater = (LayoutInflater) Home.this.getSystemService(LAYOUT_INFLATER_SERVICE); layoutLogin = inflater.inflate(R.layout.login,(ViewGroup) findViewById(R.id.rl)); usernameInput =((EditText)findViewById(R.id.getNewUsername)); passwordInput = ((EditText)findViewById(R.id.getNewPassword)); Then I have my overridden onCreateDialog(...) : { AlertDialog d = null; AlertDialog.Builder builder = new AlertDialog.Builder(this); switch(id){ ... case Home.DIALOG_LOGIN: builder.setView(layoutLogin); builder.setMessage("Sign in to your DyCaPo Account").setCancelable(false); d=builder.create(); d.setTitle("Login"); Message msg = new Message(); msg.setTarget(Home.this.handleLogin); Bundle data = new Bundle(); data.putString("username", usernameInput.getText().toString());// <---null pointer Exception data.putString("password", passwordInput.getText().toString()); msg.setData(data); d.setButton(Dialog.BUTTON_NEUTRAL,"Sign in",msg); break; ... return d; } and the handler set in the Message: private Handler handleLogin= new Handler(){ /* (non-Javadoc) * @see android.os.Handler#handleMessage(android.os.Message) */ @Override public void handleMessage(Message msg) { // TODO Auto-generated method stub Log.d("Message Received", msg.getData().getString("username")+ msg.getData().getString("password")); } }; which for now works as a debugging tool. That's all. The Question is: what am I doing wrong? Because when I reach the line highlighted in the code ( the line in which I read the fields in the dialog ) I always get a null pointer exception. Could somebody please tell me the reason why it is so? And give some guidelines to work with dialogs. Thanks in advance!

    Read the article

  • Most efficient algorithm for merging sorted IEnumerable<T>

    - by franck
    Hello, I have several huge sorted enumerable sequences that I want to merge. These lists are manipulated as IEnumerable but are already sorted. Since input lists are sorted, it should be possible to merge them in one trip, without re-sorting anything. I would like to keep the deferred execution behavior. I tried to write a naive algorithm which does that (see below). However, it looks pretty ugly and I'm sure it can be optimized. There may exist a more academic algorithm... IEnumerable<T> MergeOrderedLists<T, TOrder>(IEnumerable<IEnumerable<T>> orderedlists, Func<T, TOrder> orderBy) { var enumerators = orderedlists.ToDictionary(l => l.GetEnumerator(), l => default(T)); IEnumerator<T> tag = null; var firstRun = true; while (true) { var toRemove = new List<IEnumerator<T>>(); var toAdd = new List<KeyValuePair<IEnumerator<T>, T>>(); foreach (var pair in enumerators.Where(pair => firstRun || tag == pair.Key)) { if (pair.Key.MoveNext()) toAdd.Add(pair); else toRemove.Add(pair.Key); } foreach (var enumerator in toRemove) enumerators.Remove(enumerator); foreach (var pair in toAdd) enumerators[pair.Key] = pair.Key.Current; if (enumerators.Count == 0) yield break; var min = enumerators.OrderBy(t => orderBy(t.Value)).FirstOrDefault(); tag = min.Key; yield return min.Value; firstRun = false; } } The method can be used like this: // Person lists are already sorted by age MergeOrderedLists(orderedList, p => p.Age); assuming the following Person class exists somewhere: public class Person { public int Age { get; set; } } Duplicates should be conserved; we don't care about their order in the new sequence. Do you see any obvious optimization I could use?
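For comparison, here is a minimal alternative sketch (not the poster's code) of a k-way selection merge that keeps deferred execution: it holds one live enumerator per input and repeatedly yields the smallest current head, which costs O(n*k) comparisons for k lists; replacing the linear scan with a heap over the heads would bring that down to O(n*log k). It assumes the key type is IComparable, which the original method does not require.

using System;
using System.Collections.Generic;

static class SortedMerge
{
    public static IEnumerable<T> Merge<T, TOrder>(
        IEnumerable<IEnumerable<T>> orderedLists, Func<T, TOrder> orderBy)
        where TOrder : IComparable<TOrder>
    {
        // One enumerator per input, each advanced to its first element.
        var heads = new List<IEnumerator<T>>();
        foreach (var list in orderedLists)
        {
            var e = list.GetEnumerator();
            if (e.MoveNext()) heads.Add(e); else e.Dispose();
        }

        try
        {
            while (heads.Count > 0)
            {
                // Find the enumerator whose current element has the smallest key.
                int minIndex = 0;
                for (int i = 1; i < heads.Count; i++)
                {
                    if (orderBy(heads[i].Current).CompareTo(orderBy(heads[minIndex].Current)) < 0)
                        minIndex = i;
                }

                yield return heads[minIndex].Current;

                // Advance the winner; drop it once it is exhausted.
                if (!heads[minIndex].MoveNext())
                {
                    heads[minIndex].Dispose();
                    heads.RemoveAt(minIndex);
                }
            }
        }
        finally
        {
            // Dispose any enumerators left over if the caller stops iterating early.
            foreach (var e in heads) e.Dispose();
        }
    }
}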

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application I thought "Hey, this should be easy!" because Windows Phone 7 development is based on Silverlight and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can't use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there's a reference to an assembly that's not available (System.Windows.Browser). My second thought was "OK, no problem!" because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you'd like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that's running in the context of a page in a SharePoint site, because the credentials of the currently logged on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site's page, so the application should build credentials that have to be passed to SharePoint's WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient doesn't support authentication; there is a Credentials property, but when you set it and make the request you get a NotImplementedException. Probably this issue will be solved in the very near future, since Silverlight 4 does support authentication, and there's already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I'm writing in this paragraph are just assumptions that make a lot of sense IMHO; I don't have any information that all of this will happen, but I really hope so. So are SharePoint developers out of the Windows Phone development game until they get this fixed? Well, luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data. 
The following code snippet is getting all the announcements of an Announcements list in a SharePoint site:

HttpWebRequest webReq =
    (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");
webReq.Credentials = new NetworkCredential("username", "password");

webReq.BeginGetResponse(
    (result) =>
    {
        HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;

        XDocument xdoc = XDocument.Load(
            ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());

        XNamespace ns = "http://www.w3.org/2005/Atom";
        var items = from item in xdoc.Root.Elements(ns + "entry")
                    select new { Title = item.Element(ns + "title").Value };

        this.Dispatcher.BeginInvoke(() =>
        {
            foreach (var item in items)
                MessageBox.Show(item.Title);
        });
    }, webReq);

When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly, because the code uses Linq to XML to parse the resulting Atom feed; the Title of every announcement is then displayed in a MessageBox. Check out my previous post if you'd like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data. When you plan to use this technique, it's of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method that gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

delegate void GetAtomFeedCallback(Stream responseStream);

public MainPage()
{
    InitializeComponent();

    SupportedOrientations = SupportedPageOrientation.Portrait |
        SupportedPageOrientation.Landscape;

    string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
    string username = "username";
    string password = "password";
    string domain = "";

    GetAtomFeed(url, username, password, domain, (s) =>
    {
        XNamespace ns = "http://www.w3.org/2005/Atom";
        XDocument xdoc = XDocument.Load(s);

        var items = from item in xdoc.Root.Elements(ns + "entry")
                    select new { Title = item.Element(ns + "title").Value };

        this.Dispatcher.BeginInvoke(() =>
        {
            foreach (var item in items)
            {
                MessageBox.Show(item.Title);
            }
        });
    });
}

private static void GetAtomFeed(string url, string username,
    string password, string domain, GetAtomFeedCallback cb)
{
    HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
    webReq.Credentials = new NetworkCredential(username, password, domain);

    webReq.BeginGetResponse(
        (result) =>
        {
            HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
            HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
            cb(resp.GetResponseStream());
        }, webReq);
}
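As a small follow-up illustration of that encapsulation (my sketch, not part of the original post; it reuses the url, username, password and domain variables from the snippet above and assumes a ListBox named AnnouncementsList exists in the page's XAML):

// Feed the parsed titles to a list control instead of showing a MessageBox per item.
GetAtomFeed(url, username, password, domain, (s) =>
{
    XNamespace ns = "http://www.w3.org/2005/Atom";
    var titles = (from item in XDocument.Load(s).Root.Elements(ns + "entry")
                  select item.Element(ns + "title").Value).ToList();

    // UI elements may only be touched from the UI thread.
    this.Dispatcher.BeginInvoke(() => AnnouncementsList.ItemsSource = titles);
});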

    Read the article

  • ISQLQuery using AddJoin not loading related instance

    - by Remi Despres-Smyth
    I've got a problem using an ISQLQuery with an AddJoin. The entity I'm trying to return is RegionalFees, which has a composite-id which includes a Province instance. (This is the the instance being improperly loaded.) Here's the mapping: <class name="Project.RegionalFees, Project" table="tblRegionalFees"> <composite-id name="Id" class="Project.RegionalFeesId, project" unsaved-value="any" access="property"> <key-many-to-one class="Project.Province, Project" name="Region" access="property" column="provinceId" not-found="exception" /> <key-property name="StartDate" access="property" column="startDate" type="DateTime" /> </composite-id> <property name="SomeFee" column="someFee" type="Decimal" /> <property name="SomeOtherFee" column="someOtherFee" type="Decimal" /> <!-- Other unrelated stuff --> </class> <class name="Project.Province, Project" table="trefProvince" mutable="false"> <id name="Id" column="provinceId" type="Int64" unsaved-value="0"> <generator class="identity" /> </id> <property name="Code" column="code" access="nosetter.pascalcase-m-underscore" /> <property name="Label" column="label" access="nosetter.pascalcase-m-underscore" /> </class> Here's my query method: public IEnumerable<RegionalFees> GetRegionalFees() { // Using an ISQLQuery cause there doesn't appear to be an equivalent of // the SQL HAVING clause, which would be optimal for loading this set const String qryStr = "SELECT * " + "FROM tblRegionalFees INNER JOIN trefProvince " + "ON tblRegionalFees.provinceId=trefProvince.provinceId " + "WHERE EXISTS ( " + "SELECT provinceId, MAX(startDate) AS MostRecentFeesDate " + "FROM tblRegionalFees InnerRF " + "WHERE tblRegionalFees.provinceId=InnerRF.provinceId " + "AND startDate <= ? " + "GROUP BY provinceId " + "HAVING tblRegionalFees.startDate=MAX(startDate))"; var qry = NHibernateSessionManager.Instance.GetSession().CreateSQLQuery(qryStr); qry.SetDateTime(0, DateTime.Now); qry.AddEntity("RegFees", typeof(RegionalFees)); qry.AddJoin("Region", "RegFees.Id.Region"); return qry.List<RegionalFees>(); } The odd behavior here is that when I call GetRegionalFees (whose goal is to load just the most recent fee instances per region), it all loads fine if the Province instance is a proxy. If, however, Province is not loaded as a transparent proxy, the Province instance which is part of RegionalFees' RegionalFeesId property has null Code and Region values, although the Id value is set correctly. It looks to me like I have a problem in how I'm joining the Province class - since if it's lazy loaded the id is set from tblRegionalFees, and it gets loaded independently afterwards - but I haven't been able to figure out the solution.

    Read the article

  • Loading a ConfigurationSection with a required child ConfigurationElement with .Net configuration fr

    - by Vadim Rybak
    I have a console application that is trying to load a CustomConfigurationSection from a web.config file. The custom configuration section has a custom configuration element that is required. This means that when I load the config section, I expect to see an exception if that config element is not present in the config. The problem is that the .NET framework seems to be completely ignoring the IsRequired attribute. So when I load the config section, it just creates an instance of the custom configuration element and sets it on the config section. My question is, why is this happening? I want the GetSection() method to fire a ConfigurationErrorsException since a required element is missing from the configuration. Here is how my config section looks. public class MyConfigSection : ConfigurationSection { [ConfigurationProperty("MyConfigElement", IsRequired = true)] public MyConfigElement MyElement { get { return (MyConfigElement) this["MyConfigElement"]; } } } public class MyConfigElement : ConfigurationElement { [ConfigurationProperty("MyAttribute", IsRequired = true)] public string MyAttribute { get { return this["MyAttribute"].ToString(); } } } Here is how I load the config section. class Program { public static Configuration OpenConfigFile(string configPath) { var configFile = new FileInfo(configPath); var vdm = new VirtualDirectoryMapping(configFile.DirectoryName, true, configFile.Name); var wcfm = new WebConfigurationFileMap(); wcfm.VirtualDirectories.Add("/", vdm); return WebConfigurationManager.OpenMappedWebConfiguration(wcfm, "/"); } static void Main(string[] args) { try{ string path = @"C:\Users\vrybak\Desktop\Web.config"; var configManager = OpenConfigFile(path); var configSection = configManager.GetSection("MyConfigSection") as MyConfigSection; MyConfigElement elem = configSection.MyElement; } catch (ConfigurationErrorsException ex){ Console.WriteLine(ex.ToString()); } } Here is what my config file looks like. <?xml version="1.0"?> <configuration> <configSections> <section name="MyConfigSection" type="configurationFrameworkTestHarness.MyConfigSection, configurationFrameworkTestHarness" /> </configSections> <MyConfigSection> </MyConfigSection> The weird part is that if I open the config file and load the section 2 times in a row, I will get the exception that I expect. var configManager = OpenConfigFile(path); var configSection = configManager.GetSection("MyConfigSection") as MyConfigSection; configManager = OpenConfigFile(path); configSection = configManager.GetSection("MyConfigSection") as MyConfigSection; If I use the code above, then the exception will fire and tell me that MyConfigElement is required. The question is: why is it not throwing this exception the first time??
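One hedged workaround sketch (mine, not from the question, and it does not explain why GetSection behaves this way): after loading the section, the element's presence can be enforced manually via ElementInformation.IsPresent.

// Hypothetical guard using the MyConfigSection/MyConfigElement classes above.
var configSection = configManager.GetSection("MyConfigSection") as MyConfigSection;

// When the element is missing, a default MyConfigElement instance is handed back,
// but ElementInformation.IsPresent reports whether it was actually declared in config.
if (configSection == null || !configSection.MyElement.ElementInformation.IsPresent)
{
    throw new ConfigurationErrorsException(
        "MyConfigElement is required but was not found in MyConfigSection.");
}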

    Read the article
