Search Results

Search found 262 results on 11 pages for 'peek'.


  • Loading XML file containing leading zeros with SSIS preserving the zeros

    - by Compudicted
    Visiting the MSDN SQL Server Integration Services Forum, I would often see people pop up asking the same question: "Why am I not able to load an element from an XML file that contains zeros so that the leading/trailing zeros remain intact?" I started to suspect that such a trivial and often-required operation is simply misunderstood by part of the developer community. I would also add that the whole state of affairs surrounding XML today is probably being affected by a movement of people who dislike XML in general, since many aspects of it, such as XSD and XSLT, invoke a negative reaction at best. Nevertheless, XML is in wide use today, and its importance as a bridge between diverse systems is ever increasing. I therefore decided to write up an example of loading an arbitrary XML file that contains leading zeros in one of its elements into a table in a SQL Server database using SSIS, preserving the leading zeros and keeping simplicity in mind.

    To start off, bring up BIDS (running as admin) and add a new Data Flow Task (DFT). This DFT will serve as the container for our XML processing elements (besides, the XML Source is not available anywhere else other than from within a DFT). Double-click your DFT, then drag and drop the XML Source component from the Toolbox's Data Flow Sources. Now, let the fun begin! Inspired by the upcoming Christmas, I created a simple XML file with one set of data containing an imaginary SSN for Rudolph with several leading zeros: 0000003. This file can be viewed here.

    To configure the XML Source, it is of course quite intuitive to point it at our XML file; next, the XML Source needs either an embedded schema (XSD), or it can generate one for us. Lacking one, I opted to auto-generate it, and I ended up with an XSD that looked like this:

        <?xml version="1.0"?>
        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="XMasEvent">
            <xs:complexType>
              <xs:sequence>
                <xs:element minOccurs="0" name="CaseInfo">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element minOccurs="0" name="ID" type="xs:unsignedByte" />
                      <xs:element minOccurs="0" name="CreatedDate" type="xs:unsignedInt" />
                      <xs:element minOccurs="0" name="LastName" type="xs:string" />
                      <xs:element minOccurs="0" name="FirstName" type="xs:string" />
                      <xs:element minOccurs="0" name="SSN" type="xs:unsignedByte" /> <!-- becomes string -->
                      <xs:element minOccurs="0" name="DOB" type="xs:unsignedInt" />
                      <xs:element minOccurs="0" name="Event" type="xs:string" />
                      <xs:element minOccurs="0" name="ClosedDate" />
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    As an aside on the XML file: if your XML file does not contain the outer node (<XMasEvent>), you may end up in a situation where you see just one field in the output. Now please note that the SSN element's data type was auto-detected as unsignedByte, and for a reason: all the figures in the element are digits. This is good, but it is not exactly what we need, because if we attempt to load the data with this XSD we are going to either get errors at the destination or, most typically, lose the leading zeros. So the next intuitive choice is to change the data type to string.
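    The change itself is a one-liner. For clarity, here is the SSN element from the auto-generated XSD above with the only attribute that needs to change, its type, switched to string (everything else stays exactly as generated):

        <xs:element minOccurs="0" name="SSN" type="xs:string" />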
    Besides, if an SSIS package was already created based on this XSD and the data type change was made afterwards, you should reset the metadata by right-clicking the XML Source and choosing "Advanced Editor", which has a Refresh button at the bottom left that will do the trick. So far so good; we are ready to load our XML file. Well, actually yes and no: in my experience some data conversion is typically required, so depending on your data destination you may need to tweak the targeted data types. Let's add a Data Conversion Task to our DFT. Your package should look like this: To make the story short, I will only cover the SSN field. In my data source the target SQL table has it as nchar(10), and we chose string in our XSD (yes, this is a big difference), so under such circumstances SSIS will complain. We therefore go and change the data type of SSN to Unicode String (DT_WSTR), "World String" per se. The conversion should look like this: And a peek at the metadata: We are almost there; now all we need is to configure the destination. For simplicity I chose the SQL Server Destination. The mapping is a breeze. F5, and I am able to insert my data into SQL Server! Checking the zeros: they are all intact!

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember: testing) and for other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. The reason I mention this is that commercial profilers support this functionality only in their professional editions, and normally only since .NET 4.0, because only from that version on does the profiling API support attaching to a running process. This library differs in many aspects from "normal" profilers: while profiling yourself, you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which exists only as a method local in another thread, you can get your hands on it now. Enough theory, show me some code:

        /// <summary>
        /// Show feature to not only get statistics out of a process but also the newly allocated
        /// instances since the last call to MarkCurrentObjects.
        /// GetNewObjects does return the newly allocated objects as object array.
        /// </summary>
        static void InstanceTracking()
        {
            using (var dumper = new MemoryDumper()) // if you have problems, use new MemoryDumper(true, true) to see the debugger windows
            {
                dumper.MarkCurrentObjects();
                Allocate();
                ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                         .ToLookup(x => x.GetType());
                Console.WriteLine("New Strings:");
                foreach (var newStr in newObjects[typeof(string)])
                {
                    Console.WriteLine("Str: {0}", newStr);
                }
            }
        }

        ...
        New Strings:
        Str: qqd
        Str: String data:
        Str: String data: 0
        Str: String data: 1
        ...

    This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and run queries over them. When I find more time, I may add reconstructing the object root graph from within my own process. Is this cool or what? You can also peek into the finalization queue to check whether you accidentally forgot to dispose a whole bunch of objects:

        /// <summary>
        /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization
        /// and have no other object roots anymore.
        /// </summary>
        static void NotYetFinalizedObjects()
        {
            using (var dumper = new MemoryDumper())
            {
                object[] finalizable = dumper.GetObjectsReadyForFinalization();
                Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                    finalizable.Length,
                    String.Join(",", finalizable.ToLookup(x => x.GetType())
                                                .Select(x => x.Key.Name)));
            }
        }

    How does it work? The W of WMemoryProfiler is a good hint: it employs Windbg and the SOS dll to do the heavy lifting, and concentrates on an easy-to-use API which hides Windbg completely. If you do not want to see Windbg, you will never see it. In my experience the most complex part is actually downloading Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if Windbg is missing, so I will not go into it here.

    What next? Depending on the feedback I get, I can imagine some features which might be useful as well:

    - Calculate first-order GC roots from the actual object graph
    - Identify global statics in types in the object graph
    - Support reading the finalization queue of .NET 2.0 as well
    - Support memory dump analysis (again, a feature supported by commercial profilers only in their professional editions, if it is supported at all)
    - Deserialize objects from a memory dump back into a live process (this would need some more investigation, but it is doable)

    The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store some logging/tracing data in your live process, which can become quite big, but since it is never written anywhere it is very fast to generate. When your process crashes with a memory dump, you could transfer this data structure back into a live viewer, which could then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look at the current feature list of WMemoryProfiler with some examples.

    How to get started? First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment, in the main method, the scenario you want to check out. If you are greeted with an exception, it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I saw something more limited in the Java world some years ago (I cannot find the link anymore), but anyway, now we have something much better.

    Read the article

  • What's New in Oracle VM VirtualBox 4.2?

    - by Fat Bloke
    A year is a long time in the IT industry. Since the last VirtualBox feature release, which was a little over a year ago, we've seen: new releases of cool new operating systems, such as Windows 8, ChromeOS, and Mountain Lion; a myriad of new Linux releases, from big Enterprise-class distributions like Oracle Linux 6.3 to accessible desktop distros like Ubuntu 12.04 and Fedora 17; and we've also seen the spec of a typical PC or laptop double in power. All of these events have influenced our new VirtualBox version, which we're releasing today. Here's how...

    Powerful hosts. One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more VMs. Some of our users have large libraries of VMs of various vintages, whilst others have groups of VMs that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends. So we're pleased to unveil a more powerful VirtualBox Manager to address the needs of these users:

    VM Groups. Groups allow you to organize your VM library in a sensible way, e.g. by platform type, by project, by version, by whatever. To create groups you can drag one VM onto another, or select one or more VMs and choose Machine...Group from the menu bar. You can expand and collapse groups to save screen real estate, and you can Enter and Leave a group (think iPad navigation here) by using the right and left arrow keys when groups are selected. But groups are more than passive folders, because you can now also perform operations on groups rather than on all the individual VMs. So if you have a multi-tiered solution, you can start the whole stack up with just one click.

    Autostart. Many VirtualBox users run dedicated services in their VMs, for example, running a Wiki. With these types of VM workloads, you really want the VM to start up when the host machine boots up. So with 4.2 we've introduced a cross-platform Auto-start mechanism to allow you to treat VMs as host services.

    Headless VM Launching. With VMs such as web servers, wikis, and other types of server-class workloads, the Console of the VM is pretty much redundant. For some time now VirtualBox has offered a separate launch mechanism for these VMs, namely the command-line interface commands VBoxHeadless or VBoxManage startvm ... --type headless (see the command-line sketch below). But with 4.2 we also allow you to launch headless VMs from the Manager: simply hold down Shift when launching the VM from the Manager. It's that easy. But how do you stop a headless VM? Well, with 4.2 we allow you to Close the VM from the Manager. (BTW, it's best to use the ACPI Shutdown method, which allows the guest VM to close down gracefully.)

    Easy VM Creation. For our expert users, the New VM Wizard was a little tiresome, so now there's a faster 2-click VM creation mode: just Hide the description when creating a new VM.

    Powerful VMs. As the hosts have become more powerful, so have the guests that run inside them. Here are some of the 4.2 features to accommodate them:

    Virtual Network Interface Cards. With 4.2, it's now possible to create VMs with up to 36 NICs when using the ICH9 chipset emulation. But with great power comes great responsibility (didn't Obi-Wan say something similar?), and so we have also introduced bandwidth limiting to prevent a rogue VM stealing the whole pipe.

    VLAN tagging. Some of our users leverage VLANs extensively, so we've enhanced the E1000 NICs to support this.
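    To make the headless workflow concrete, here is what those command-line invocations look like in practice. This is just a usage sketch: the VM name "wiki" is a placeholder for one of your own VMs, and the second command is the ACPI shutdown route mentioned above.

        VBoxManage startvm "wiki" --type headless    (start the VM with no console window)
        VBoxManage controlvm "wiki" acpipowerbutton  (ask the guest OS to shut down gracefully)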
    Processor Performance. If you are running a CPU which supports Nested Paging (aka EPT in the Intel world), such as most of the Core i5 and i7 CPUs, or are running an AMD Bulldozer or later, you should see some performance improvements from our work with these processors. And while we're talking processors, we've added support for some of the more modern VIA CPUs too.

    Powerful Automation. Because VirtualBox runs atop a full-blown operating system, it makes sense to leverage the capabilities of the host to run scripts that can drive the guest VMs. Guest Automation was introduced in a prior release, but with 4.2 we've revamped the APIs to allow a richer and more powerful set of operations to be executed by the guest. Check out the IGuest APIs in the VirtualBox Programming Guide and Reference (SDK).

    Powerful Platforms. All the hardcore engineering that has gone into 4.2 has been done for a purpose, and that is to deliver a fast and powerful engine that can run almost any x86 OS because of the integrity of the virtualization. So we're pleased to add support for these platforms:

    - Mac OS X "Mountain Lion"
    - Windows 8
    - Windows Server 2012
    - Ubuntu 12.04 ("Precise Pangolin")
    - Fedora 17
    - Oracle Linux 6.3

    Here's the proof: We don't have time to go into the myriad of smaller improvements, such as support for burning audio CDs from a guest, bi-directional clipboard control, and drag-and-drop of files into Linux guests, so we'll leave that as an exercise for the user as soon as you've downloaded the release from the Oracle or community site and taken a peek at the User Guide. So all in all, a pretty solid release, one that we hope you'll enjoy discovering.

    - FB

    Read the article

  • How to arrange models, views, controllers in a Kohana 3 project

    - by Pekka
    I'm looking at how to set up a mid-sized web application with Kohana 3. I have implemented MVC patterns in the past but never worked against a "formalized" MVC framework, so I'm still getting my head around the terminology: toying around with basic examples, building views and templates, and so on. I'm progressing fairly well, but I want to set up a real-world web project (one of my own that I've been planning for quite some time now) as a learning object. I learn best by example, but example-based documentation is a bit sparse for Kohana 3 right now; they say so themselves on the site. While I'm not worried about getting into the framework soon enough, I'm a bit concerned about arranging a healthily structured code base from the start, i.e. how to split up controllers, how to name them, and how to separate the functionality into the appropriate models.

    My application could, at its core, be described as a business directory with a main businesses table. Businesses can be listed by category and by street name. Each business has a detail page. Business owners can log in and edit their business's entry. Businesses can post offers into an offers table. I know this is not very detailed, but I don't want to cram too much information into this question. I'll be more than happy to go into more detail if needed.

    Supposing I have all the basic functionality worked out and in place already (list all businesses, edit business, list businesses by street name, create offer logged in as a business, and so on), and I'm just looking for how to fit the functionality into an MVC pattern and into a Kohana application structure that can be easily extended:

    - Do you know real-life, publicly accessible examples of "database-heavy" applications like directories or online communities, with a log-in area, built on Kohana 3, where I could take a peek at how they do it?
    - Are there conventions or best practices on how to structure an extendable login area for end users in a Kohana project that is not only able to handle a business directory page, but further products on separate pages as well?
    - Do you know application structuring HOWTOs or best practices for Kohana 3 not mentioned in the user guide and the unofficial wiki?
    - Have you built something similar and could give me some recommendations?

    Read the article

  • Arranging models, views, controllers in a Kohana 3 project

    - by Pekka
    I'm looking at how to set up a mid-sized web application with Kohana 3. I have implemented MVC patterns in the past but never worked against a "formalized" MVC framework, so I'm still getting my head around the terminology: toying around with basic examples, building views and templates, and so on. I'm progressing fairly well, but I want to set up a real-world web project (one of my own that I've been planning for quite some time now) as a learning object. I learn best by example, but example-based documentation is a bit sparse for Kohana 3 right now; they say so themselves on the site. While I'm not worried about getting into the framework soon enough, I'm a bit concerned about arranging a healthily structured code base from the start, i.e. how to split up controllers, how to name them, and how to separate the functionality into the appropriate models.

    My application could, at its core, be described as a business directory with a main businesses table. Businesses can be listed by category and by street name. Each business has a detail page. Business owners can log in and edit their business's entry. Businesses can post offers into an offers table. I know this is pretty vague, but I don't want to cram too much information into this question. I'll be more than happy to go into more detail if needed.

    Supposing I have all the basic functionality worked out and in place already (list all businesses, edit business, list businesses by street name, create offer, and so on), and I'm just looking for how to fit the functionality into an MVC pattern and into a Kohana application structure that can be easily extended:

    - Do you know real-life, publicly accessible examples of "database-heavy" applications like directories or online communities, with a log-in area, built on Kohana 3, where I could take a peek at how they do it?
    - Are there conventions or best practices on how to structure an extendable login area for end users in a Kohana project that is not only able to handle a business directory page, but further products on separate pages as well?
    - Do you know application structuring HOWTOs or best practices for Kohana 3 not mentioned in the user guide and the unofficial wiki?
    - Have you built something similar and could give me some recommendations?

    Read the article

  • Java Socket Connection is flooding network OR resulting in high ping

    - by user1461100
    I have a little problem with my Java socket code. I'm writing an Android client application which sends data to a Java multithreaded socket server on my PC through a direct(!) wireless connection. It works fine, but I want to improve it for mobile use, as it is currently very power-consuming. When I remove two particular lines in my code, the CPU usage of my mobile device (HTC One X) is totally okay, but then my connection seems to suffer from high ping times or something like that...

    Here is a server code snippet, where I receive the client's data:

        while (true) {
            try {
                ...
                Object obj = in.readObject();
                if (obj != null) {
                    Class clazz = obj.getClass();
                    String className = clazz.getName();
                    if (className.equals("java.lang.String")) {
                        String cmd = (String) obj;
                        if (cmd.equals("dc")) {
                            System.out.println("Client " + id + " disconnected!");
                            Server.connectedClients[id - 1] = false;
                            break;
                        }
                        if (cmd.substring(0, 1).equals("!")) {
                            robot.keyRelease(PlayerEnum.getKey(cmd, id));
                        } else {
                            robot.keyPress(PlayerEnum.getKey(cmd, id));
                        }
                    }
                }
            } catch ...

    Here is the client part, where I send my data in a while loop:

        private void networking() {
            try {
                if (client != null) {
                    ...
                    out.writeObject(sendQueue.poll());
                    ...
                }
            } catch ...

    When I write it this way, I send data every time the while loop executes; when sendQueue is empty, a null "object" is sent. This results in "high" network traffic and "high" CPU usage. BUT: all sent commands are received nearly immediately. When I change the code to the following:

        while (true)
            ...
            if (sendQueue.peek() != null) {
                out.writeObject(sendQueue.poll());
            }
            ...

    the CPU usage is totally okay, but I'm getting some lag: the commands do not arrive fast enough. As I said, it works fine (besides the CPU usage) if I'm sending data (with those null objects) on every loop iteration, but I'm sure this is a very rough coding style, because I'm kind of flooding the network. Any hints? What am I doing wrong? Thanks for your help! Sincerely yours, maaft
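    A common fix for this kind of busy-wait is a blocking queue instead of peek()/poll() in a tight loop. The sketch below assumes sendQueue can be swapped for a java.util.concurrent.BlockingQueue; the class and field names are hypothetical, since the rest of the poster's client code is not shown. take() blocks until a command is available, so the sender thread sleeps instead of spinning, no null objects ever go over the wire, and commands are still written immediately:

        import java.io.ObjectOutputStream;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class CommandSender implements Runnable {
            private final BlockingQueue<String> sendQueue = new LinkedBlockingQueue<String>();
            private final ObjectOutputStream out;

            public CommandSender(ObjectOutputStream out) {
                this.out = out;
            }

            // Called from the UI/game thread whenever a command should be sent.
            public void send(String cmd) {
                sendQueue.offer(cmd);
            }

            @Override
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        // Blocks until a command arrives: no spinning, no null writes.
                        String cmd = sendQueue.take();
                        out.writeObject(cmd);
                        out.flush();
                    }
                } catch (Exception e) {
                    // Connection closed or thread interrupted: exit the sender loop.
                }
            }
        }

    If commands still feel delayed after such a change, disabling Nagle's algorithm on the client socket (socket.setTcpNoDelay(true)) is worth trying: small, infrequent writes are exactly the traffic pattern Nagle batches up, which would also explain why constant null-object flooding appeared to lower the latency.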

    Read the article

  • In Google Glass, Menu Items are not shown after XE 17.2 Update, any Solutions?

    - by Amalan Dhananjayan
    This worked when Glass was on XE12. I opened the solution after about two months, and now, with XE17, the menu items are not shown when the live card is tapped; instead, the live card disappears. I have updated the GDK, and I have changed the code to support the latest GDK sneak peek version 2 changes according to this (https://developers.google.com/glass/release-notes#xe12). This is the code:

        public class MenuActivity extends Activity {

            private static final String TAG = MenuActivity.class.getSimpleName();

            private VisionService.VisionBinder mVisionService;
            private ServiceConnection mConnection = new ServiceConnection() {
                @Override
                public void onServiceConnected(ComponentName name, IBinder service) {
                    if (service instanceof VisionService.VisionBinder) {
                        mVisionService = (VisionService.VisionBinder) service;
                        openOptionsMenu();
                    }
                    // No need to keep the service bound.
                    unbindService(this);
                }

                @Override
                public void onServiceDisconnected(ComponentName name) {
                }
            };

            private boolean mResumed;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                bindService(new Intent(this, VisionService.class), mConnection, 0);
            }

            @Override
            protected void onResume() {
                super.onResume();
                mResumed = true;
                openOptionsMenu();
            }

            @Override
            protected void onPause() {
                super.onPause();
                mResumed = false;
            }

            @Override
            public void openOptionsMenu() {
                if (mResumed && mConnection != null) {
                    super.openOptionsMenu();
                }
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.menu, menu);
                return true;
            }

            @Override
            public boolean onOptionsItemSelected(MenuItem item) {
                int id = item.getItemId();
                if (id == R.id.action_send) {
                    mVisionService.requestWorkOrderCard();
                    finish();
                    return true;
                } else if (id == R.id.action_refresh) {
                    mVisionService.requestTopWorkOrders();
                    finish();
                    return true;
                } else if (id == R.id.action_finish) {
                    stopService(new Intent(this, VisionService.class));
                    finish();
                    return true;
                } else {
                    return super.onOptionsItemSelected(item);
                }
            }

            @Override
            public void onOptionsMenuClosed(Menu menu) {
                super.onOptionsMenuClosed(menu);
            }
        }

    It would be great if anybody could help with this. Thank you.
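    One detail worth double-checking when a tap dismisses a live card instead of opening its menu: the card must still have an action PendingIntent attached when it is published, since the menu Activity above is only ever launched through that action. The snippet below is a hedged sketch of that wiring, not a confirmed fix; it assumes the final GDK's LiveCard constructor (on the sneak-peek GDK the card is obtained via TimelineManager instead), and the tag string and layout name are hypothetical:

        import android.app.PendingIntent;
        import android.content.Intent;
        import android.widget.RemoteViews;
        import com.google.android.glass.timeline.LiveCard;

        // Hypothetical helper inside the service that owns the live card (e.g. VisionService):
        private LiveCard publishCardWithMenu() {
            LiveCard liveCard = new LiveCard(this, "vision");  // tag is arbitrary
            liveCard.setViews(new RemoteViews(getPackageName(), R.layout.card));
            Intent menuIntent = new Intent(this, MenuActivity.class);
            // Without this action PendingIntent, tapping the published card has nothing to launch.
            liveCard.setAction(PendingIntent.getActivity(this, 0, menuIntent, 0));
            liveCard.publish(LiveCard.PublishMode.REVEAL);
            return liveCard;
        }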

    Read the article

  • The Complete List of iPad Tips, Tricks, and Tutorials

    - by Ross
    The Apple iPad is the latest new toy, and we've put together a comprehensive list of every tip, trick, and tutorial we could find to help you get the most out of it - and we're even giving one away to one lucky reader. So read on! Note: we'll be keeping this page updated as we find more great articles, so you should bookmark this page for future reference.

    Want Your Own iPad? How-To Geek is Giving One Away! All you have to do to enter is become a fan of our Facebook page, and we'll pick a random fan to win the prize. (Win an iPad on the How-To Geek Facebook Fan Page)

    Disable the "clicking sound" on the iPad Keyboard: Does the clicking sound when you tap the iPad keyboard bother you? Thankfully it's easy to disable with a couple of taps. (How to disable the "clicking sound" on your iPad's keyboard)

    Enable and add bookmarks to the Safari Bookmarks Bar on your iPad: By default, Safari doesn't display the Bookmarks Bar. This tip shows you how to change that. (How to enable and add bookmarks to the Safari Bookmarks Bar on your iPad)

    Clear the Cache, History and Cookies in Safari for the iPad: You're probably used to clearing this kind of data right from within the browser. Not so with Safari on the iPad - but here's how you can. (How to clear the cache, history and cookies in Safari for iPad)

    How to add more Apps to your iPad Dock: The iPad has four icons in its dock. Did you know it can hold six? (How to add more Apps to your iPad Dock)

    Convert PDF files to ePub files to read on your iPad with iBooks: ePub is the format iBooks uses. So for those of you with large eBook collections in PDF, here's how to convert them for reading in iBooks. (How to convert PDF files to ePub files to read on your iPad with iBooks)

    How to force your iPad to restart: Has an app caused your iPad to freeze up, and you can't escape? This tip shows you how to force your iPad to restart. (How to force your iPad to restart)

    How to export Keynote for iPad presentations to your Mac or PC: Exporting Keynote presentations from your iPad to your Mac or PC isn't as straightforward as you might expect. This tutorial shows you how. (How to export Keynote for iPad presentations to your Mac or PC)

    How to import presentations to Keynote on your iPad: Having trouble getting your presentations onto your iPad? (How to import presentations to Keynote on your iPad)

    How to import documents to Pages on your iPad: This guide shows you how to transfer documents (MS Word or Pages) from your Mac/PC to your iPad. (How to import documents to Pages on your iPad)

    How to insert photos in a Pages document using iPad and share it as a PDF: Want to spice up that doc with a picture you just took? This tutorial will show you how - and how to export that document as a PDF. (How to insert photos in a Pages document using iPad and share it as a PDF)

    How to lock your iPad: If you have kids or co-workers/friends who think it's funny to mess with your iPad - lock it. (How to lock your iPad)

    How to remove the "Sent from my iPad" signature from outgoing email on your iPad: Does everyone need to know you just sent that email from your iPad? Probably not. This guide shows you how to remove the "Sent from my iPad" signature and replace it with your own (or none). (How to remove the "Sent from my iPad" signature from outgoing email on your iPad)

    How To Sync Multiple Calendars to the iPad With Google Sync: This tutorial shows you a workaround for syncing multiple calendars on your iPad using Google Sync. (How to Sync Multiple Calendars to the iPad With Google Sync)

    How to determine the MAC address of your iPad: If your network restricts connections by MAC address, this guide will show you how to determine what yours is. (How to determine the MAC address of your iPad)

    How to take a screenshot of your iPad: Do you need to take a screenshot on your iPad? This quick tip shows you how to do just that. (How to take a screenshot of your iPad)

    How to delete apps from your iPod Touch, iPhone or iPad: Anyone who had an iPod Touch or iPhone before they had an iPad won't need this tutorial. But if you're new to the experience, this one will help. (How to delete apps from your iPod Touch, iPhone or iPad)

    How to determine the iPad ECID on Windows and Mac: iPadintosh shows us how to determine the iPad's ECID code - something you'll want to have come Jailbreak time. (How to grab the iPad ECID in Windows or OS X)

    iPad Apps: Twitter and social networking essentials: Engadget has you covered with reviews of the first slew of iPad-specific Twitter and other social networking apps. (iPad Apps: Twitter and social networking essentials)

    What does your website look like on an iPad? iPad Peek is a web-based tool that lets you enter any given URL and displays that page the same way Safari on the iPad does. Great for website owners who don't have access to an iPad. (iPadPeek)

    Stream Music and Videos to your iPad: Gizmodo reviews the iPad app StreamToMe, which allows you to stream media from your Mac to your iPad across your local network. Their feelings in a nutshell: worth the $3, but not perfect. (Review: StreamToMe for the iPad)

    Apple iPad: Change links in Google Reader to point to the full HTML webpage. (How to change links in Safari for iPad so that Google Reader points to a full HTML webpage)

    How to connect an iPad to your existing wireless keyboard: This video will show you how to connect your iPad to a wireless keyboard if you're having any problems - and from the sound of things, quite a few folks are. (via TUAW)

    How to get started with the iPad: Mashable has a very entry-level guide that will help you set up your iPad for the first time. (Mashable's Guide to Setting up the iPad)

    Essential iPad Apps: Downloadsquad gives mini-reviews of 8 iPad apps that you should install as soon as you get your iPad. (iPad App Buyers Guide: Essential Apps you should get on day one)

    Videos: The Official iPad Guided Tours: From none other than Apple! Great getting-started videos for all the included iPad apps. (The Official iPad Guided Tours)

    The Official iPad Manual: When you buy an iPad, you don't get a manual. But that's not to say there isn't one: Apple provides a 150-page guide for your iPad in PDF format. (The Official iPad Manual (pdf))

    How to print from your iPad: Sure, it's actually just an app (PrintCentral - $9.99 USD), but as of right now, it's the only way. (PrintCentral)

    How to make your own iPad Wallpaper: A perfectly detailed tutorial on how to make your own wallpaper for your iPad. The author also provides a really nice sample wallpaper, published under the Attribution-Noncommercial 2.0 Generic license. (How to make your own iPad Wallpaper)

    Got any more tips? Share them in the comments, and we'll update the post with the links, or just the tip itself.

    Read the article

  • Week in Geek: US Govt E-card Scam Siphons Confidential Data Edition

    - by Asian Angel
    This week we learned how to "back up photos to Flickr, automate repetitive tasks, & normalize MP3 volume", enable "stereo mix" in Windows 7 to record audio, create custom papercraft toys, read up on three alternatives to Apple's flaky iOS alarm clock, decorated our desktops & app docks with Google icon packs, and more. Photo by alexschlegel.

    Random Geek Links: It has been a busy week on the security & malware fronts, and we have a roundup of the latest news to help keep you updated. Photo by TopTechWriter.US.

    US govt e-card scam hits confidential data: A fake U.S. government Christmas e-card has managed to siphon off gigabytes of sensitive data from a number of law enforcement and military staff who work on cybersecurity matters, many of whom are involved in computer crime investigations.

    Security tool uncovers multiple bugs in every browser: Michal Zalewski reports that he discovered the vulnerability in Internet Explorer a while ago using his cross_fuzz fuzzing tool and reported it to Microsoft in July 2010. Zalewski also used cross_fuzz to discover bugs in other browsers, which he also reported to the relevant organisations.

    Microsoft to fix Windows holes, but not ones in IE: Microsoft said that it will release two security bulletins next week fixing three holes in Windows, but it is still investigating or working on fixing holes in Internet Explorer that have reportedly been exploited in attacks.

    Microsoft warns of Windows flaw affecting image rendering: Microsoft has warned of a Windows vulnerability that could allow an attacker to take control of a computer if the user is logged on with administrative rights.

    Windows 7 Not Affected by Critical 0-Day in the Windows Graphics Rendering Engine: While confirming that details of a critical zero-day vulnerability have made their way into the wild, Microsoft noted that customers running the latest iteration of Windows client and server platforms are not exposed to any risks.

    Microsoft warns of Office-related malware: Microsoft's Malware Protection Center issued a warning this week that it has spotted malicious code on the Internet that can take advantage of a flaw in Word and infect computers after a user does nothing more than read an e-mail. (This refers to a flaw that was addressed in the November security patch releases; make sure you have all of the latest security updates installed.)

    Unpatched hole in ImgBurn disk burning application: According to security specialist Secunia, a highly critical vulnerability in ImgBurn, a lightweight disk burning application, can be used to remotely compromise a user's system.

    Hole in VLC Media Player: Virtual Security Research (VSR) has identified a vulnerability in VLC Media Player, in versions up to and including 1.1.5.

    Flash Player sandbox can be bypassed: Flash applications run locally can read local files and send them to an online server - something which the sandbox is supposed to prevent.

    Chinese auction site touts hacked iTunes accounts: Tens of thousands of reportedly hacked iTunes accounts have been found on Chinese auction site Taobao, but the company claims it is unable to take action unless there are direct complaints.

    What happened in the recent Hotmail outage: Mike Schackwitz explains the cause of the recent Hotmail outage.

    DOJ sends order to Twitter for Wikileaks-related account info: The U.S. Justice Department has obtained a court order directing Twitter to turn over information about the accounts of activists with ties to Wikileaks, including an Icelandic politician, a legendary Dutch hacker, and a U.S. computer programmer.

    Google gets court to block Microsoft Interior Department e-mail win: The U.S. Federal Claims Court has temporarily blocked Microsoft from proceeding with the $49.3 million, five-year DOI contract that it won this past November.

    Google Apps customers get email lockdown: Companies and organisations using Google Apps are now able to restrict the email access of selected users.

    LibreOffice Is the Default Office Suite for Ubuntu 11.04: Matthias Klose has announced some details regarding the replacement of the old OpenOffice.org 3.2.1 packages with the new LibreOffice 3.3 ones, starting with the upcoming Ubuntu 11.04 (Natty Narwhal) Alpha 2 release.

    Sysadmin Geek Tips (Photo by Filomena Scalise):

    How to Setup Software RAID for a Simple File Server on Ubuntu: Do you need a file server that is cheap and easy to set up, "rock solid" reliable, and has email alerting? This tutorial shows you how to use Ubuntu, software RAID, and SaMBa to accomplish just that.

    How to Control the Order of Startup Programs in Windows: While you can specify the applications you want to launch when Windows starts, the ability to control the order in which they start is not available. However, there are a couple of ways you can easily overcome this limitation and control the startup order of applications.

    Random TinyHacker Links:

    Using Opera Unite to Send Large Files: A tutorial on using Opera Unite to easily send huge files from your computer.

    WorkFlowy is a Useful To-do List Tool: A cool to-do list tool that lets you integrate multiple tasks in one single list easily.

    Playing Flash Videos on iOS Devices: Yes, you can play Flash videos on jailbroken iPhones. Here's a tutorial.

    Clear Safari History and Cookies On iPhone: A tutorial on clearing your browser history on iPhone and other iOS devices.

    Monitor Your Internet Usage: Here's a cool, cross-platform tool to monitor your internet bandwidth.

    Super User Questions: See what the community had to say on these popular questions from Super User this week.

    - Why is my upload speed much less than my download speed?
    - Where should I find drivers for my laptop if it didn't come with a driver disk?
    - OEM Office 2010 without media: how to reinstall?
    - Is there a point to using theft tracking software like Prey on my laptop, if you have login security?
    - Moving an "all-in-one" PC when turned on/off

    How-To Geek Weekly Article Recap: Get caught up on your HTG reading with our hottest articles from this past week.

    - How to Combine Rescue Disks to Create the Ultimate Windows Repair Disk
    - How To Boot 10 Different Live CDs From 1 USB Flash Drive
    - What is Camera Raw, and Why Would a Professional Prefer it to JPG?
    - Did You Know Facebook Has Built-In Shortcut Keys?
    - The How-To Geek Guide to Audio Editing: The Basics

    One Year Ago on How-To Geek: Enjoy looking through our latest gathering of retro article goodness.

    - Learning Windows 7: Create a Homegroup & Join a New Computer To It
    - How To Disconnect a Machine from a Homegroup
    - Use Remote Desktop To Access Other Computers On a Small Office or Home Network
    - How To Share Files and Printers Between Windows 7 and Vista
    - Allow Users To Run Only Specified Programs in Windows 7

    The Geek Note: That is all we have for you this week, and we hope your first week back at work or school has gone very well now that the holidays are over. Know a great tip?
    Send it in to us at [email protected]. Photo by Pamela Machado.

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET applications, from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("gzip"))
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
            }
            // Allow proxy servers to cache encoded and unencoded versions separately.
            // Note: Vary names a *request* header, so "Accept-Encoding" (rather than
            // "Content-Encoding", as originally posted) is the correct value here.
            Response.AppendHeader("Vary", "Accept-Encoding");
        }

    The first method checks whether the client sending the request includes the accept-encoding header for either gzip or deflate, and if it does, it returns true. The second function uses IsGZipSupported() to decide whether it should encode content, and uses a Response.Filter to do its job. Basically, response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with, but the two .NET filter streams for GZip and Deflate compression make this a snap to implement. In my old code, and even now in MVC, I can always do:

        public ActionResult List(string keyword = null, int category = 0)
        {
            WebUtils.GZipEncodePage();
            ...
        }

    to encode my content. And that works just fine.

    The proper way: Create an ActionFilterAttribute. However, in MVC this sort of thing is typically better handled by an ActionFilter, which can be applied with an attribute. So, to be all prim and proper, I created a CompressContentAttribute ActionFilter that incorporates those two helper methods and looks like this:

        /// <summary>
        /// Attribute that can be added to controller methods to force content
        /// to be GZip encoded if the client supports it
        /// </summary>
        public class CompressContentAttribute : ActionFilterAttribute
        {
            /// <summary>
            /// Override to compress the content that is generated by
            /// an action method.
            /// </summary>
            /// <param name="filterContext"></param>
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                GZipEncodePage();
            }

            /// <summary>
            /// Determines if GZip is supported
            /// </summary>
            /// <returns></returns>
            public static bool IsGZipSupported()
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (!string.IsNullOrEmpty(AcceptEncoding) &&
                    (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                    return true;
                return false;
            }

            /// <summary>
            /// Sets up the current page or handler to use GZip through a Response.Filter
            /// IMPORTANT: You have to call this method before any output is generated!
            /// </summary>
            public static void GZipEncodePage()
            {
                HttpResponse Response = HttpContext.Current.Response;
                if (IsGZipSupported())
                {
                    string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                    if (AcceptEncoding.Contains("gzip"))
                    {
                        Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                        Response.Headers.Remove("Content-Encoding");
                        Response.AppendHeader("Content-Encoding", "gzip");
                    }
                    else
                    {
                        Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                        Response.Headers.Remove("Content-Encoding");
                        Response.AppendHeader("Content-Encoding", "deflate");
                    }
                }
                // Allow proxy servers to cache encoded and unencoded versions separately.
                // Note: Vary names a *request* header, so "Accept-Encoding" (rather than
                // "Content-Encoding", as originally posted) is the correct value here.
                Response.AppendHeader("Vary", "Accept-Encoding");
            }
        }

    It's basically the same code wrapped into an ActionFilter attribute, which intercepts MVC requests to Controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting(), which fires before the Controller action runs. With the CompressContentAttribute created, it can now be applied either to the controller as a whole:

        [CompressContent]
        public class ClassifiedsController : ClassifiedsBaseController
        {
            ...
        }

    or to one of the Action methods:

        [CompressContent]
        public ActionResult List(string keyword = null, int category = 0)
        {
            ...
        }

    The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content, and that's where others are likely to expect to set options like this. In fact, you have a bit more control with the utility function, because you can conditionally apply it in code, but this is actually much less likely in MVC applications than in old WebForms apps, since controller methods tend to be more focused.

    Compression Caveats. Http compression is very cool and pretty easy to implement in ASP.NET, but you have to be careful with it - especially if your content might get transformed or redirected inside of ASP.NET. A good example is when an error occurs while a compression filter is applied: ASP.NET errors don't clear the filter, but they do clear the Response headers, which results in some nasty garbage because the compressed content no longer matches the headers. Another issue is caching, which has to account for all the possible ways compressed and non-compressed content can be served. Basically, compressed content and caching don't mix well. I wrote about several of these issues in an old blog post, and I recommend you take a quick peek before diving into making every bit of output GZip encoded.
    None of these are show stoppers, but you have to be aware of the issues.

    Related Posts: GZip Compression with ASP.NET Content; ASP.NET GZip Encoding Caveats.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC.

    Read the article

  • Test All Features of Windows Phone 7 On Your PC

    - by Matthew Guay
    Are you a developer, or just excited about the upcoming Windows Phone 7, and want to try it out now? Thanks to free developer tools from Microsoft and a new unlocked emulator ROM, you can try out most of the exciting features today from your PC.

    Last week we showed you how to try out Windows Phone 7 on your PC and get started developing for the upcoming new devices. We noticed, however, that the emulator only contains Internet Explorer Mobile and some settings. This is still interesting to play around with, but it isn't the full Windows Phone 7 experience. Some enterprising tweakers discovered that more applications were actually included in the emulator but were simply hidden from users. Developer Dan Ardelean then figured out how to re-enable these features and released a tweaked emulator ROM so everyone can try out all of the Windows Phone 7 features for themselves. Here we'll look at how you can run this new emulator image on your PC, and then look at some interesting features in Windows Phone 7.

    Editor Note: This modified emulator image is not official, and isn't sanctioned by Microsoft. Use your own judgment when choosing to download and use the emulator.

    Setting Up the Emulator ROM. To test-drive Windows Phone 7 on your PC, you must first download and install the Windows Phone Developer Tools CTP (link below). Follow the steps we showed you last week at: Try out Windows Phone 7 on your PC today. Once it's installed, go ahead and run the default emulator as we showed, to make sure everything works okay.

    Once the Windows Phone Developer Tools are installed and running, download the new emulator ROM from XDA Forums (link below). This will be a zip file, so extract it first, and note where you save the file, as you will need the path in the next step.

    Now, to run our new emulator image, we need to launch the emulator from the command line and point it to the new ROM image. To do this, browse to the correct directory, depending on whether you're running a 32-bit or 64-bit version of Windows:

        32-bit: C:\Program Files\Microsoft XDE\1.0\
        64-bit: C:\Program Files (x86)\Microsoft XDE\1.0\

    Hold your Shift key down and right-click in the folder, then choose Open Command Window Here. At the command prompt, enter XDE.exe followed by the location of your new ROM image. Here, we downloaded the ROM to our download folder, so at the command prompt we entered:

        XDE.exe C:\Users\Matthew\Downloads\WM70Full\WM70Full.bin

    The emulator loads ... with the full Windows Phone 7 experience!

    To make it easier, let's make a shortcut on our desktop to load the emulator with the new ROM directly. Right-click on your desktop (or any folder you want to create the shortcut in), select New, and then Shortcut. Now, in the box, we need to enter the path to the emulator executable followed by the location of our ROM; both items must be in quotes. So, in our test, we entered the following (the original listing omitted the XDE.exe filename from the first path, but the shortcut target needs the executable itself):

        32-bit: "C:\Program Files\Microsoft XDE\1.0\XDE.exe" "C:\Users\Matthew\Downloads\WM70Full\WM70Full.bin"
        64-bit: "C:\Program Files (x86)\Microsoft XDE\1.0\XDE.exe" "C:\Users\Matthew\Downloads\WM70Full\WM70Full.bin"

    Make sure to enter the correct location of the new emulator ROM for your computer, and keep both items in separate quotes. Click Next when you've entered the location, then name the shortcut; we named it Windows Phone 7, but enter whatever you'd like. Click Finish when you're done. You should now have a nice Windows Phone icon and a fully functional shortcut! Double-click it to run the Windows Phone 7 emulator as above.
    Features in the Unlocked Windows Phone 7 Emulator. So let's look at what you can do with this new emulator. Almost everything you've seen in demos from the Mobile World Congress and Mix'10 is right here for you to play with. Here's the application menu, which you can access by clicking on the arrow at the top of the home screen; look how much stuff they've packed in! And, of course, even the home screen itself shows much more activity than it did in the original emulator.

    Let's check out some of these sections. Here's Zune running on Windows Phone 7, and the Zune Marketplace. The animations are beautiful, so be sure to check this out yourself. The new picture hub is much nicer than any picture viewer included with Windows Mobile in the past. Stay productive, and on schedule, with the new Calendar. The XBOX hub gives us only a hint of things to come; the links to games are simply placeholders right now.

    Here's a look at the Office hub. This doesn't show up on the home screen right now, but you can access it in the applications menu. Office obviously still has a lot of work left on it, but even at a glance it looks like it includes a lot more functionality than Office Mobile in Windows Mobile 6. Here's a look at each of the three apps - Word, Excel, and OneNote - and the formatting palette in the Office apps.

    This emulator also includes a lot more settings than the default one, including settings for individual applications. You can even activate the screen lock and try out the lift-to-peek-or-unlock feature. Finally, this version of Windows Phone 7 includes a very nice SystemInfo app with an advanced task manager. We hope this is still available when the actual phones are released.

    Conclusion. If you're excited about the upcoming Windows Phone 7 series, or simply want to learn more about what's coming, this is a great way to test it out. With these exciting new hubs and applications, there's something here for everyone. Let us know what you like most about Windows Phone 7 and what your favorite app or hub is.

    Links. Please note: these ROMs are not officially supported by Microsoft and could be taken down.
    Download the unlocked Windows Phone 7 emulator from XDA Forums - click the link in this post to download
    How the unlocked emulator image was created

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer seems like an excellent subject for kicking off my new "401 - Internals" tag, which identifies posts where I pull back the curtains a bit and let you peek into what's going on inside the Extended Events engine.

    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY option, along with MEMORY_PARTITION_MODE, defines how big each buffer will be. Theoretically, that means I can predict the size of each buffer using the following formula:

        max_memory / (number of buffers) = buffer size

    If it were that simple, I wouldn't be writing this post.

    I'll take "boundary" for 64K, Alex. For a number of reasons that are beyond the scope of this blog, we create event buffers in 64 KB chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64 KB boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs. (Note: this test was run on a 2-core machine using per_cpu partitioning, which results in 5 buffers. See my previous post referenced above for the math behind the buffer count.)

        input_memory_kb   total_regular_buffers   regular_buffer_size   total_buffer_size
        637               5                       130867                654335
        638               5                       130867                654335
        639               5                       130867                654335
        640               5                       196403                982015
        641               5                       196403                982015
        642               5                       196403                982015

    This is just a segment of the results, showing one of the "jumps" between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

        196403 - 130867 = 65536 bytes
        65536 / 1024 = 64 KB

    The relationship between the max_memory input and the point where regular_buffer_size jumps from one 64 KB boundary to the next will change based on the number of buffers being created. The number of buffers depends on the partition mode you choose: if you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration (again, see the earlier post referenced above). With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a "range table" for max_memory settings between 1 KB and 4096 KB as an example; it follows the short worked example below.
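    Before moving on to the range table, here is the arithmetic behind the jump between 639 KB and 640 KB in the 5-buffer output above. The per-buffer byte counts come straight from dm_xe_sessions; reading the numbers, each buffer appears to reserve roughly 200 bytes of its 64 KB chunks for internal bookkeeping, which is my inference from the figures rather than anything documented.

        639 KB / 5 buffers = 127.8 KB per buffer -> fits within two 64 KB chunks (130867 usable bytes)
        640 KB / 5 buffers = 128.0 KB per buffer = 131072 bytes -> exceeds the 130867 usable bytes,
                             so each buffer steps up to three 64 KB chunks (196403 usable bytes)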
    start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
    1                      191                  NULL                   NULL                 NULL
    192                    383                  3                      130867               392601
    384                    575                  3                      196403               589209
    576                    767                  3                      261939               785817
    768                    959                  3                      327475               982425
    960                    1151                 3                      393011               1179033
    1152                   1343                 3                      458547               1375641
    1344                   1535                 3                      524083               1572249
    1536                   1727                 3                      589619               1768857
    1728                   1919                 3                      655155               1965465
    1920                   2111                 3                      720691               2162073
    2112                   2303                 3                      786227               2358681
    2304                   2495                 3                      851763               2555289
    2496                   2687                 3                      917299               2751897
    2688                   2879                 3                      982835               2948505
    2880                   3071                 3                      1048371              3145113
    3072                   3263                 3                      1113907              3341721
    3264                   3455                 3                      1179443              3538329
    3456                   3647                 3                      1244979              3734937
    3648                   3839                 3                      1310515              3931545
    3840                   4031                 3                      1376051              4128153
    4032                   4096                 3                      1441587              4324761

    As you can see, there are 21 "steps" within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really only refers to the event session buffer memory), but is more of an estimate of total buffer size, rounded to the next higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the 576–767 row in the table above), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get "stepped up" for every 191 KB block of initial max_memory, which isn't likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets "stepped up" when you break a boundary, the delta can get much larger because it's multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or, if you have 8 NUMA nodes configured on that machine, 24 buffers when using per_node. If you've just broken through a 64K boundary and been "stepped up" to the next buffer size, you'll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you're getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you'll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The "buffer steps" for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions, you can use the following code to generate a range table, similar to the one above, that is applicable to your specific machine and chosen partition mode.
    DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
    DECLARE @buf_size int, @part_mode varchar(8)
    SET @buf_size = 1        -- Set to the beginning of your max_memory range (KB)
    SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate
    WHILE @buf_size <= 4096  -- Set to the end of your max_memory range (KB)
    BEGIN
        BEGIN TRY
            IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                DROP EVENT SESSION buffer_size_test ON SERVER
            DECLARE @session nvarchar(max)
            SET @session = 'create event session buffer_size_test on server
                            add event sql_statement_completed
                            add target ring_buffer
                            with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
            EXEC sp_executesql @session
            SET @session = 'alter event session buffer_size_test on server
                            state = start'
            EXEC sp_executesql @session
            INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
        END TRY
        BEGIN CATCH
            INSERT @buf_size_output (input_memory_kb)
                SELECT @buf_size
        END CATCH
        SET @buf_size = @buf_size + 1
    END
    DROP EVENT SESSION buffer_size_test ON SERVER
    SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb,
           total_regular_buffers, regular_buffer_size, total_buffer_size
    FROM @buf_size_output
    GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Events internals. - Mike
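    (A quick aside for readers: if you only want to spot-check a single session that is already running, rather than generating the whole range table, a minimal query against the same DMV looks like the sketch below; 'buffer_size_test' is just the example session name used above, so substitute your own.)

    -- Minimal sketch: inspect the actual buffer sizing of one running event session.
    SELECT name,
           total_regular_buffers,   -- number of buffers created
           regular_buffer_size,     -- size of each buffer, in bytes
           total_buffer_size        -- total memory used by the buffers
    FROM sys.dm_xe_sessions
    WHERE name = 'buffer_size_test';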

    Read the article

  • Building extensions for Expression Blend 4 using MEF

    - by Timmy Kokke
    Introduction Although it was possible to write extensions for Expression Blend and Expression Design before, it wasn't very easy, and out of the box only one addin could be used. With Expression Blend 4 it is possible to write extensions using MEF, the Managed Extensibility Framework. Until today there's been no documentation on how to build these extensions, so digging through the code with Reflector is something you'll have to do very often. Because Blend and Design are built using WPF, searching the visual tree with Snoop and Mole belongs in the set of tools you'll be using a lot while exploring the possibilities.
    Configuring the extension project Extensions are regular .NET class libraries. To create one, load up Visual Studio 2010 and start a new project. Because Blend is built using WPF, choose a WPF User Control Library from the Windows section and give it a name and location. I named mine DemoExtension1. Because Blend looks for addins named *.extension.dll, you'll have to tell Visual Studio to use that in the Assembly Name. To change the Assembly Name, right-click your project and go to Properties. On the Application tab, add .Extension to the name already in the Assembly name text field. To be able to debug this extension, I prefer to set the output path on the Build tab to the extensions folder of Expression Blend. This means that everything that used to go into the Debug folder is placed in the extensions folder, including all referenced assemblies that have the Copy Local property set to true. One last setting: to be able to debug your extension you could start Blend and attach the debugger by hand, but I like to be able to just hit F5. Go to the Debug tab and add the full path to Blend.exe in the Start external program text field.
    Extension Class Add a new class to the project. This class needs to implement the IPackage interface. The IPackage interface can be found in the Microsoft.Expression.Extensibility namespace. To get access to this namespace, add Microsoft.Expression.Extensibility.dll to your references. This file can be found in the same folder as the (Expression Blend 4 Beta) Blend.exe file. Make sure the Copy Local property is set to false on this reference. After implementing the interface, the class would look something like:

    using Microsoft.Expression.Extensibility;

    namespace DemoExtension1
    {
        public class DemoExtension1 : IPackage
        {
            public void Load(IServices services) { }

            public void Unload() { }
        }
    }

    These two methods are called when your addin is loaded and unloaded. The parameter passed to the Load method, IServices services, is your main entry point into Blend. The IServices interface exposes the GetService<T> method, and you will be using this method a lot: almost every part of Blend can be accessed through a service. For example, you can get to the commanding services of Blend by calling GetService<ICommandService>(), or to the windowing services by calling GetService<IWindowService>(). To get Blend to load the extension we have to implement MEF. (You can get up to speed on MEF on the community site or read the blog of Mr. MEF, Glenn Block.) In the case of Blend extensions, all that needs to be done is to mark the class with an Export attribute and pass it the type of IPackage. The Export attribute can be found in the System.ComponentModel.Composition namespace, which is part of the .NET 4 framework. You need to add this to your references.
    using System.ComponentModel.Composition;
    using Microsoft.Expression.Extensibility;

    namespace DemoExtension1
    {
        [Export(typeof(IPackage))]
        public class DemoExtension1 : IPackage
        {

    Blend is now able to find your addin.
    Adding UI The addin doesn't do very much at this point. The WPF User Control Library came with a UserControl, so let's use that in this example. I just dropped a Button and a TextBlock onto the surface of the control to have something to show in the demo. To get the UserControl to work in Blend, it has to be registered with the window service. Call GetService<IWindowService>() on the IServices interface to get access to the windowing services. The UserControl will be used in Blend on a palette and has to be registered to enable it. This is done by calling RegisterPalette on the IWindowService interface and passing it an identifier, an instance of the UserControl, and a caption for the palette.

    public void Load(IServices services)
    {
        IWindowService windowService = services.GetService<IWindowService>();
        UserControl1 uc = new UserControl1();
        windowService.RegisterPalette("DemoExtension", uc, "Demo Extension");
    }

    After hitting F5 to start debugging, Expression Blend will start. You should be able to find the addin in the Window menu now. Activating this window will show the "Demo Extension" palette with the UserControl, styled according to the settings of Blend.
    Now what? Because little is publicly known about how to access different parts of Blend, adding breakpoints in debug mode and browsing through objects using the Quick Watch feature of Visual Studio is something you'll have to do very often. This demo extension can be used for that purpose very easily. Add a click event handler to the button on the UserControl. Change the constructor to take the IServices interface and store it in a field. Set a breakpoint in the button1_Click method.

    public partial class UserControl1 : UserControl
    {
        private readonly IServices _services;

        public UserControl1(IServices services)
        {
            _services = services;
            InitializeComponent();
        }

        private void button1_Click(object sender, RoutedEventArgs e)
        {
        }
    }

    Change the call to the constructor in the Load method and pass it the services parameter.

    public void Load(IServices services)
    {
        IWindowService service = services.GetService<IWindowService>();
        UserControl1 uc = new UserControl1(services);
        service.RegisterPalette("DemoExtension", uc, "Demo Extension");
    }

    Hit F5 to compile and start Blend. Go to the Window menu and show the addin. Click on the button to hit the breakpoint. Now place the caret on the _services field in the code window and hit Shift+F9 to show the Quick Watch window. Now start exploring and discovering where to find everything you need.
    More Information There are no official resources available yet. Microsoft has released one extension for Expression Blend that is very useful as a reference: the Microsoft Expression Blend® Add-in Preview for Windows® Phone. This installs a .extension.dll file in the extension folder of Blend. You can load this file with Reflector and have a peek at how Microsoft builds its addins.
    Conclusion I hope this gives you something to get started building extensions for Expression Blend. Until Microsoft releases the final version, which hopefully includes more information about building extensions, we'll have to work on documenting it in the community.

    Read the article

  • Why won't fetchmail work all of a sudden?

    - by SirCharlo
    I ran a chmod 777 * on my home folder. (I know, I know. I'll never do it again.) Ever since then, fetchmail seems to be broken. I use it to fetch mail from an Exchange 2003 mailbox through DAVMail and OWA. The problem is that fetchmail complains about an "expunge mismatch" whenever I get a new message. It deletes the message from the Exchange mailbox, yet it never forwards it. There seems to be a problem somewhere along the mail processing chain, but I haven't been able to pinpoint where. Any help would be appreciated. Here are the relevant config files.
    ~/.fetchmailrc:

    set no bouncemail
    defaults: antispam -1 batchlimit 100
    poll localhost with protocol imap and port 1143
        user domain\\user password Password is root no rewrite
        mda "/usr/bin/procmail -f %F -d %T";

    ~/.procmailrc:

    :0
    * ^Subject.*ack
    | expand | sed -e 's/[ ]*$//g' | sed -e 's/^/ /' > /usr/local/nagios/libexec/mail_acknowledgement

    ~/.forward:

    | "/usr/bin/procmail"

    And here is the output when I run fetchmail -f /root/.fetchmailrc -vv:

    fetchmail: WARNING: Running as root is discouraged.
    Old UID list from localhost: <empty>
    Scratch list of UIDs: <empty>
    fetchmail: 6.3.19 querying localhost (protocol IMAP) at Tue 03 Jul 2012 09:46:36 AM EDT: poll started
    Trying to connect to 127.0.0.1/1143...connected.
    fetchmail: IMAP< * OK [CAPABILITY IMAP4REV1 AUTH=LOGIN] IMAP4rev1 DavMail 3.9.7-1870 server ready
    fetchmail: IMAP> A0001 CAPABILITY
    fetchmail: IMAP< * CAPABILITY IMAP4REV1 AUTH=LOGIN
    fetchmail: IMAP< A0001 OK CAPABILITY completed
    fetchmail: Protocol identified as IMAP4 rev 1
    fetchmail: GSSAPI error gss_inquire_cred: Unspecified GSS failure. Minor code may provide more information
    fetchmail: GSSAPI error gss_inquire_cred:
    fetchmail: No suitable GSSAPI credentials found. Skipping GSSAPI authentication.
    fetchmail: If you want to use GSSAPI, you need credentials first, possibly from kinit.
    fetchmail: IMAP> A0002 LOGIN "domain\\user" *
    fetchmail: IMAP< A0002 OK Authenticated
    fetchmail: selecting or re-polling default folder
    fetchmail: IMAP> A0003 SELECT "INBOX"
    fetchmail: IMAP< * 1 EXISTS
    fetchmail: IMAP< * 1 RECENT
    fetchmail: IMAP< * OK [UIDVALIDITY 1]
    fetchmail: IMAP< * OK [UIDNEXT 344]
    fetchmail: IMAP< * FLAGS (\Answered \Deleted \Draft \Flagged \Seen $Forwarded Junk)
    fetchmail: IMAP< * OK [PERMANENTFLAGS (\Answered \Deleted \Draft \Flagged \Seen $Forwarded Junk)]
    fetchmail: IMAP< A0003 OK [READ-WRITE] SELECT completed
    fetchmail: 1 message waiting after first poll
    fetchmail: IMAP> A0004 EXPUNGE
    fetchmail: IMAP< A0004 OK EXPUNGE completed
    fetchmail: 1 message waiting after expunge
    fetchmail: IMAP> A0005 SEARCH UNSEEN
    fetchmail: IMAP< * SEARCH 1
    fetchmail: 1 is unseen
    fetchmail: IMAP< A0005 OK SEARCH completed
    fetchmail: 1 is first unseen
    1 message for domain\user at localhost.
    fetchmail: IMAP> A0006 FETCH 1 RFC822.SIZE
    fetchmail: IMAP< * 1 FETCH (UID 343 RFC822.SIZE 1350)
    fetchmail: IMAP< A0006 OK FETCH completed
    fetchmail: IMAP> A0007 FETCH 1 RFC822.HEADER
    fetchmail: IMAP< * 1 FETCH (UID 343 RFC822.HEADER {1350}
    reading message domain\user@localhost:1 of 1 (1350 header octets)
    fetchmail: about to deliver with: /usr/bin/procmail -f '[email protected]' -d 'root'
    #
    fetchmail: IMAP<
    fetchmail: IMAP<
    fetchmail: IMAP< Bonne journ=E9e..
    fetchmail: IMAP<
    fetchmail: IMAP< Company Name
    fetchmail: IMAP< My Name
    fetchmail: IMAP< IT
    fetchmail: IMAP< Tel: (XXX) XXX-XXXX xXXX
    fetchmail: IMAP< www.domain.com=20
    fetchmail: IMAP<
    fetchmail: IMAP<
    fetchmail: IMAP< -----Message d'origine-----
    fetchmail: IMAP< De=A0: User [mailto:[email protected]]=20
    fetchmail: IMAP< Envoy=E9=A0: 2 juillet 2012 15:50
    fetchmail: IMAP< =C0=A0: Informatique
    fetchmail: IMAP< Objet=A0: PROBLEM: photo
    fetchmail: IMAP<
    fetchmail: IMAP< Notification Type: PROBLEM
    fetchmail: IMAP< Author:=20
    fetchmail: IMAP< Comment:=20
    fetchmail: IMAP<
    fetchmail: IMAP< Host: Photos
    fetchmail: IMAP< Hostname: photo
    fetchmail: IMAP< State: DOWN
    fetchmail: IMAP< Address: XXX.XX.X.XX
    fetchmail: IMAP<
    fetchmail: IMAP< Date/Time: Mon Jul 2 15:49:38 EDT 2012
    fetchmail: IMAP<
    fetchmail: IMAP< Info: CRITICAL - XXX.XX.X.XX: rta nan, lost 100%
    fetchmail: IMAP<
    fetchmail: IMAP<
    fetchmail: IMAP< )
    fetchmail: IMAP< A0007 OK FETCH completed
    fetchmail: IMAP> A0008 FETCH 1 BODY.PEEK[TEXT]
    fetchmail: IMAP< * 1 FETCH (UID 343 BODY[TEXT] {539}
    (539 body octets) *******************************
    fetchmail: IMAP< )
    fetchmail: IMAP< A0008 OK FETCH completed
    flushed
    fetchmail: IMAP> A0009 STORE 1 +FLAGS (\Seen \Deleted)
    fetchmail: IMAP< * 1 FETCH (UID 343 FLAGS (\Seen \Deleted))
    fetchmail: IMAP< * 1 EXPUNGE
    fetchmail: IMAP< A0009 OK STORE completed
    fetchmail: IMAP> A0010 EXPUNGE
    fetchmail: IMAP< A0010 OK EXPUNGE completed
    fetchmail: mail expunge mismatch (0 actual != 1 expected)
    fetchmail: IMAP> A0011 LOGOUT
    fetchmail: IMAP< * BYE Closing connection
    fetchmail: IMAP< A0011 OK LOGOUT completed
    fetchmail: client/server synchronization error while fetching from domain\user@localhost
    fetchmail: 6.3.19 querying localhost (protocol IMAP) at Tue 03 Jul 2012 09:46:36 AM EDT: poll completed
    Merged UID list from localhost: <empty>
    fetchmail: Query status=7 (ERROR)
    fetchmail: normal termination, status 7

    Read the article

  • September Independent Oracle User Group (IOUG) Regional Events:

    - by Mandy Ho
    September 5, 2012 – Denver, CO
    Oracle 11g Database Upgrade Seminar
    Join Roy Swonger, Senior Director of software development at Oracle, to learn about upgrading to Oracle Database 11g. Topics include: all the required preparatory steps; database upgrade strategies; post-upgrade performance analysis; helpful tips and common pitfalls to watch out for.
    http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=152242&src=7598177&src=7598177&Act=4

    September 6, 2012 – Salt Lake City, UT
    Fall Symposium 2012
    Plan to join us for our annual fall event on Sept 6. The day will be filled with learning and networking, with tracks focused on Applications, APEX, BI, Development and DBA topics. This event is free for UTOUG members to attend, but please register.
    http://www.utoug.org/apex/f?p=972:2:6686308836668467::::P2_EVENT_ID:121

    September 6, 2012 – Portland, OR
    Oracle's Hands-on Workshop Series, focused on providing Defense-in-Depth solutions to secure data at the source, reduce risk and simplify compliance
    The Oracle Database Security Workshop is a one-day hands-on session for IT Managers, IT Security Architects and Oracle DBAs who are looking for solutions to address their information protection, privacy, and accountability challenges within their Oracle database environment. Most security programs offered today fail to adequately address database security. Customers continue to be challenged to secure information against loss and protect the integrity of sensitive information like critical financial data, personally identifiable information (PII) and credit card data for PCI compliance.
    http://nwoug.org/content.aspx?page_id=87&club_id=165905&item_id=241082

    September 11, 2012 – Montreal, QC
    APEXposed!
    For APEX aficionados: join ODTUG in Montreal, September 11-12 for APEXposed! Topics will include Dynamic Actions, Plug-ins, Tuning, and Building Mobile Apps. The cost is $399 US and early registration ends August 15th. For more information: http://www.odtugapextraining.com

    September 11, 2012 – Philadelphia, PA
    Big Data & What are we still doing wrong, with Tom Kyte
    Tom Kyte is a Senior Technical Architect in Oracle's Server Technology Division. Tom is the Tom behind the AskTom column in Oracle Magazine and is also the author of Expert Oracle Database Architecture (Apress, 2005/2009), among other books.
    Abstract: Big Data. The term "big data" draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. However, beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. This presentation will take a look at what Big Data is and means, and Oracle's strategy for handling it.
    Abstract: What are we still doing wrong? I've given many best practices presentations in the last 10 years. I've given many worst practices presentations in the last 10 years.
    I've seen some things change over the last ten years and many other things stay exactly the same. In this talk we'll be taking a look at the good and the bad: what we do right and what we continue to do wrong over and over again. We'll look at why "Why" is probably the right initial answer to most any question. We'll look at how we get to "know what we know", and why that can be both a help and a hindrance. We'll peek at "Best Practices" and tie them into what I term "Worst Practices". In short, a talk on the good and the bad.
    http://ioug.itconvergence.com/pls/apex/f?p=207:27:3669516430980563::NO

    September 12, 2012 – New York, NY
    NYOUG Fall General Meeting
    "Trends in Database Administration and Why the Future of Database Administration is the Vdba"
    http://www.nyoug.org/upcoming_events.htm#General_Meeting1

    September 21, 2012 – Cleveland, OH
    Oracle Database 11g for Developers: What You Need to Know, or Oracle Database 11g New Features for Developers
    Attendees are introduced to the new and improved features of Oracle 11g (both Oracle 11g R1 and Oracle 11g R2) that directly impact application development. Special emphasis is placed on features that reduce development time, make development simpler, improve performance, or speed deployment. Specific topics include: new SQL functions, virtual columns, result caching, XML improvements, pivot statements, JDBC improvements, and PL/SQL enhancements such as compound triggers.
    http://www.neooug.org/

    September 24, 2012 – Ottawa, ON
    Introduction to Oracle Spatial
    The free Oracle Locator functionality, and the Oracle Spatial option which dramatically extends Locator, are very useful but poorly understood capabilities of the database. In the afternoon we will extend into additional areas selected from: storage and performance; answering business problems with spatial queries; using Oracle Maps in OBIEE; an overview and capabilities of Oracle Topology; under the covers with GeoCoding.
    http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:4209274028390246::NO

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 2: The Service

    - by Your DisplayName here!
    OK – so let's first start with a simple WCF service and connect that to ADFS 2 for authentication. The service itself simply echoes back the user's claims – just so we can make sure it actually works and to see how the ADFS 2 issuance rules emit claims for the service:

    [ServiceContract(Namespace = "urn:leastprivilege:samples")]
    public interface IService
    {
        [OperationContract]
        List<ViewClaim> GetClaims();
    }

    public class Service : IService
    {
        public List<ViewClaim> GetClaims()
        {
            var id = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            return (from c in id.Claims
                    select new ViewClaim
                    {
                        ClaimType = c.ClaimType,
                        Value = c.Value,
                        Issuer = c.Issuer,
                        OriginalIssuer = c.OriginalIssuer
                    }).ToList();
        }
    }

    The ViewClaim data contract is simply a DTO that holds the claim information. Next is the WCF configuration – let's have a look step by step. First I mapped all my http-based services to the federation binding. This is achieved by using .NET 4.0's protocol mapping feature (this can also be done the 3.x way – but in that scenario all services will be federated):

    <protocolMapping>
      <add scheme="http" binding="ws2007FederationHttpBinding" />
    </protocolMapping>

    Next, I provide a standard configuration for the federation binding:

    <bindings>
      <ws2007FederationHttpBinding>
        <binding>
          <security mode="TransportWithMessageCredential">
            <message establishSecurityContext="false">
              <issuerMetadata address="https://server/adfs/services/trust/mex" />
            </message>
          </security>
        </binding>
      </ws2007FederationHttpBinding>
    </bindings>

    This binding points to our ADFS 2 installation's metadata endpoint. This is all that is needed for svcutil (aka "Add Service Reference") to generate the required client configuration. I also chose mixed-mode security (SSL + basic message credential) for best performance. This binding also disables sessions – you can control that via the establishSecurityContext setting on the binding. This has its pros and cons – something for a separate blog post, I guess. Next, the behavior section adds support for metadata and WIF:

    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpsGetEnabled="true" />
          <federatedServiceHostConfiguration />
        </behavior>
      </serviceBehaviors>
    </behaviors>

    The next step is to add the WIF-specific configuration (in <microsoft.identityModel />). First we need to specify the key material that we will use to decrypt the incoming tokens. This is optional for web applications, but for web services you need to protect the proof key – so this is mandatory (at least for symmetric proof keys, which is the default):

    <serviceCertificate>
      <certificateReference storeLocation="LocalMachine"
                            storeName="My"
                            x509FindType="FindBySubjectDistinguishedName"
                            findValue="CN=Service" />
    </serviceCertificate>

    You also have to specify which incoming tokens you trust. This is accomplished by registering the thumbprint of the signing keys you want to accept.
    You get this information from the signing certificate configured in ADFS 2:

    <issuerNameRegistry type="...ConfigurationBasedIssuerNameRegistry">
      <trustedIssuers>
        <add thumbprint="d1 … db"
             name="ADFS" />
      </trustedIssuers>
    </issuerNameRegistry>

    The last step (promised) is to add the allowed audience URIs to the configuration – WCF clients use (by default – and we'll come back to this) the endpoint address of the service:

    <audienceUris>
      <add value="https://machine/soapadfs/service.svc" />
    </audienceUris>

    OK – that's it – now we have a basic WCF service that uses ADFS 2 for authentication. The next step will be to set up ADFS 2 to issue tokens for this service. Afterwards we can explore various options on how to use this service from a client. Stay tuned… (if you want to have a look at the full source code or peek at the upcoming parts – you can download the complete solution here)
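    (One small gap in the walkthrough: the ViewClaim DTO itself is never shown. A minimal sketch of what it presumably looks like is below, inferred purely from the projection in GetClaims above; the DataContract/DataMember attributes are an assumption.)

    using System.Runtime.Serialization;

    // Hypothetical reconstruction of the ViewClaim DTO; the four members
    // mirror the projection in GetClaims, everything else is assumed.
    [DataContract]
    public class ViewClaim
    {
        [DataMember] public string ClaimType { get; set; }
        [DataMember] public string Value { get; set; }
        [DataMember] public string Issuer { get; set; }
        [DataMember] public string OriginalIssuer { get; set; }
    }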

    Read the article

  • Simple show/hide jQuery troubles

    - by Banderdash
    Okay, I feel like a bit of an 800-pound gorilla trying to thread a needle when it comes to jQuery. I need a script that will perform a simple show/hide (preferably with a nice sliding in and out) on a list. My markup looks like this:

    <div id="themes">
      <h2>Research Themes</h2>
      <ul>
        <li class="tier_1"><a href="">Learn about our approach to the <strong>environment</strong></a>
          <ul class="tier_2 hide">
            <li><a href=""><em>How we are tying this all together</em></a></li>
            <li><a href=""><strong>Project:</strong> Solar Powered Biofactories</a></li>
            <li><a href=""><strong>Project:</strong> Cleaning Water with Nature</a></li>
            <li><a href=""><strong>Project:</strong> Higher Efficiency Solar Technology</a></li>
          </ul>
        </li>
        <li class="tier_1"><a href="">Learn about our approach to <strong>human health</strong></a>
          <ul class="tier_2 hide">
            <li><a href="">Project name numero uno goes here</a></li>
            <li><a href="">Project name numero dos goes here</a></li>
            <li><a href="">Project name numero tres goes here</a></li>
          </ul>
        </li>
        <li class="tier_1"><a href="">Learn about our approach to <strong>national defense</strong></a>
          <ul class="tier_2 hide">
            <li><a href="">Project name numero uno goes here</a></li>
            <li><a href="">Project name numero dos goes here</a></li>
            <li><a href="">Project name numero tres goes here</a></li>
          </ul>
        </li>
      </ul>
    </div><!-- // end themes -->

    You can see that each nested ul has a class of "tier_2" and "hide". Ideally, when the li they are nested within (li.tier_1) is clicked, its child ul will have the hide class removed and the li's contained within will slide out; at the same time, all the other ul.tier_2's should get a hide class, so only one theme can be expanded at a time. I set up a sandbox to try some things: http://jsbin.com/odete/3 My JS looks like this:

    $(function(){
      $(".tier_1 a").each(function(i,o){
        $(this).click(function(e){
          e.preventDefault();
          $(this).addClass("show").siblings("ul").removeClass("show");
          $("ul.tier_2:eq("+i+")").show().siblings("ul.tier_2").hide();
        });
      });
    });

    Totally a dumb way to do this, I am sure. But I based it off another script, and it does work "a little bit", as you can see in the sandbox. If any of you with mean jQuery hands might be so inclined to take a peek, I'd be very grateful. If you could also advise on how to make the transitions slide in and out, that would be fantastic!
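    (A hedged sketch of the accordion behavior being asked for, assuming the markup above and that the "hide" class simply sets display:none: jQuery's slideUp/slideToggle handle both the visibility state and the sliding animation, so the manual class bookkeeping can be dropped.)

    // Minimal accordion sketch: collapse every other theme, then toggle the clicked one.
    $(function () {
      $("#themes li.tier_1 > a").click(function (e) {
        e.preventDefault();
        var $panel = $(this).siblings("ul.tier_2"); // the clicked theme's nested list
        $("#themes ul.tier_2").not($panel).slideUp();
        $panel.slideToggle();
      });
    });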

    Read the article

  • NRF Online Merchandising Workshop: Where Online Retailers Are Focusing for Holiday and Beyond

    - by Rose Spicer-Oracle
    Last month we attended the NRF Online Merchandising Workshop in LA, and it was a great opportunity to catch up with our customers, meet new retailers, and hear some great presentations from VF Corporation, Zazzle, Julep Beauty, Backcountry, eBags and more. The one-on-one conversations with merchants and the keynote presentations carried the same themes across companies of all sizes and across verticals. With only 125 days left (and counting) until Black Friday, these conversations provided some great insight into what's top of mind for retailers during the most stressful time of their year, and a sneak peek into what they will deliver this holiday season. Some of the most popular topics were:
    When to start promoting for holiday: It seems like a funny conversation to have in July, but a number of retailers said they already had their holiday shopping gift guides live on their site, attracting a significant portion of their onsite traffic. When it comes to timing, most retailers were questioning when to begin their holiday promotions, carefully balancing when to release pricing and specials, and knowing that customers are holding out for last-minute deals and price drops. Many retailers noted the frustrations around transparent pricing by Amazon and a few other mega-retailers last year, who published their "lowest prices of the season" as early as October, assuring shoppers that those prices were the best they could get all season long. Many retailers felt their hands were forced to drop prices. Others kept their set pricing, with negative customer reaction, causing some to miss their holiday goals. The pressure is on, and most retailers identified November 1 as their target start date for the holiday promotions blitz. Some are even waiting for the big guys to release their "lowest prices of the season" guides and will then follow suit.
    Attribution is tough, and a huge focus: Understanding the path to conversion is a tough nut to crack, especially in the new omnichannel world where consumers use multiple touchpoints to make a single purchase, and internal management wants hard data. This has led many retailers to invest in attribution, carefully tracking their online marketing efforts to determine what gets "credit" for the sale, instead of giving credit to the "last click." Retailers noted that it is very difficult to determine the numbers when the online and offline worlds collide, like when a shopper uses digital channels for research and then makes a purchase in a store. As one of the presenters from The North Face mentioned in her keynote, a key to enabling better customer service and satisfaction when it comes to converged online and offline sales is training the in-store staff, and creating a culture where it eventually "doesn't matter what group gets the credit" if they all add to the sale. No doubt, the area of attribution will be a big area of retail investment in the coming years.
    How to plan for the converged world: Planning to ensure inventory gets where it needs to be was another concern. In conversations with retailers, we advised them to analyze customer patterns: where shoppers purchase items, where the items were sourced from, and even where items are returned. This analysis is very valuable in determining inventory plans. From there, retailers can more accurately plan and allocate inventory to support both online and offline customer behavior. As we head into the holiday season, the need for accurate enterprise-wide inventory visibility, and providing that information to associates, is even more critical to the brand-wide customer experience.
    Improving the search / navigation / usability of the site(s): Aside from some of the big ideas and standard holiday pricing pressure, most conversations we had centered around continuing to improve the basics of the site. Reinvesting in search and navigation came up time and time again (FitForCommerce blogged about what a big topic it was at the event as well). Obviously, getting shoppers on their path quickly and allowing them to find what they need fast is critical, but it was definitely interesting to hear just how much effort is still going into honing the search and navigation experience. Adding new elements to search and navigation like typeahead, inventive navigation refinements, and new navigation categories like gift guides, specialized boutiques and flash sales were top of mind, in addition to searchandising and making search-driven product recommendations. (Oracle can help!)
    Reducing cart abandonment: Always a hot topic that is top of mind for every online retailer. Getting shoppers to the cart is often less than half the battle; getting them to click "buy" and complete the transaction is much more difficult. While retailers carefully study the checkout process and where shoppers tend to bounce, they know that how they design their checkout page is critical. We're all online shoppers in our personal lives and we know how frustrating it can be when total prices are not transparent (i.e. shipping, processing, and taxes are not included until the very last possible screen before clicking that buy button). Online retailers are struggling with where in the checkout process to surface the total price to be charged to reduce cart abandonment, while not showing the total figure so early in the process that it keeps shoppers from getting to checkout altogether. Recent research shows that providing total pricing prior to the checkout process dramatically reduces cart abandonment, as it serves as a filter to those shopping within a specific price band. Much of the cart abandonment discussion leads us to…
    The free shipping / free returns question: It's no secret that because of Amazon and programs like Prime, consumers expect free shipping, much to the chagrin of the smaller retailer. The reality is that if you're not a mega-retailer, shipping is an expensive part of doing business that doesn't allow most retailers to keep their prices low and offer free shipping. This has many retailers venturing out on the "free returns" path, especially in apparel. A number of retailers we spoke with are testing a flat-rate shipping fee with free returns to see if they can crack the price threshold where shoppers are willing to pay for shipping with an added service. But free shipping remains king.
    Social ads and retargeting: They are working, but do they turn off consumers? That's the big question.
    Every retailer we spoke with during a roundtable on the topic said that social ads and retargeting (where that pair of boots you've been eyeing on a site magically follows you around the Internet) work and are meeting campaign goals. The larger question many retailers are asking is whether this type of tactic is turning off a large number of shoppers, even if these campaigns are meeting their early goals. Retailers also mentioned that Facebook ads are working very well for them, especially when it comes to new customer acquisition, serving as a complementary channel to SEO when it comes to engaging new customers. While there are always new things to experiment with in retail, standard challenges are top of mind as retailers scramble to get ready for holiday. It will undoubtedly be another record-breaking online shopping season, but as retailers get more and more advanced with each Black Friday, expect some exciting things. This excitement needs to be backed by sound solutions and optimized operations. Then again, consumers are expecting more than ever, so I don't doubt that retailers are already thinking about the possibilities of holiday 2015… and beyond.
    Customers who read this article also found value in the following stories:
    Personalization for Retail: http://blogs.oracle.com/retail/entry/personalization_for_retail
    Shop Direct User Experience Focus Drives Sales: https://blogs.oracle.com/retail/entry/shop_direct_user_experience_focus
    Making Waves: Australian Online Retailer SurfStitch: https://blogs.oracle.com/oracleretail/entry/surf_stitch
    What's new in Oracle Commerce v11.1 for Retail
    What the Content+Commerce Equation is Missing

    Read the article

  • Special thanks to everyone that helped me in 2010.

    - by mbcrump
    2010 has been a very good year for me and I wanted to create a list and thank everyone for what they have done for me. I also wanted to thank everyone for reading and subscribing to my blog. It is hard to believe that people actually want to read what I write. I feel like I owe a huge thanks to everyone listed below. Looking back upon 2010, I feel that I've grown as a developer, and you are part of that reason. Sometimes we get caught up in day-to-day work and forget to give thanks to those that helped us along the way. The list below is mine; it includes people and companies. This list is obviously not going to include everyone that has helped, just those that have stood out in my mind. When I think back upon 2010, their names keep popping up in my head. So here goes, in no particular order.
    People
    Dave Campbell – For everything he has done for the Silverlight community with his Silverlight Cream blog. I can't think of a better person to get recognition at the Silverlight FireStarter event. I also wanted to thank him for spending several hours of his time helping me track down a bug in my FeedBurner account.
    Victor Gaudioso – For his large collection of video tutorials on his blog and the passion and enthusiasm he has for Silverlight. We have talked on the phone and I've never met anyone so fired up for Silverlight.
    Kunal Chowdhury – Kunal has always been available for me to bounce ideas off of. Kunal has also answered a lot of questions that stumped me. His blog and CodeProject articles have been a great help to me and the Silverlight community.
    Glen Gordon – I was looking frantically for a Windows Phone 7 several months before release and Glen found one for me. This allowed me to start a blog series on the Windows Phone 7 hardware and on developing an application from start to finish, which Scott Guthrie retweeted.
    Jeff Blankenburg – For listening to my complaints in the early stages of Windows Phone 7. Jeff was always very polite and gave me his cell phone number to talk it over. He also walked me through several problems that I was having early on.
    Pete Brown – For writing Silverlight 4 in Action. This book is definitely a labor of love. I followed Pete on Twitter as he was writing it, and he spent a lot of late nights and weekends working on it. I felt a lot smarter after reading it the first time. The second time was even better.
    John Papa – For all of his work on the Silverlight Firestarter and the Silverlight community in general. He has also helped me on a personal level with several things.
    Daniel Heisler – For putting up with me the past year while we worked on many .NET projects together in 2010.
    Alvin Ashcraft – For publishing a daily blog post on the best of .NET links. He has linked to my site many times and I really appreciate what he does for the community.
    Chris Alcock – For publishing the Morning Brew every weekday. I remember when I first appeared on his site: I started getting hundreds of hits on my site and wondered if I was getting a DoS attack or something. It was great to find out that Chris had linked to one of my articles.
    Joel Cochran – For spending a week teaching "Blend-O-Rama". This was one of my favorite sessions of the year. I learned a lot about Expression Blend from it, and the best part was that it was free and during lunchtime.
    Jeremy Likness – Jeremy is smart, very smart. I have learned a lot from Jeremy over the past year.
    He is also involved in the Silverlight community in every way possible, from forums to blog posts to screencasts to open source. It goes on and on.
    The people that I met at VSLive Orlando 2010 – I had a great time chatting with Walt Ritscher, Wallace McClure, Tim Huckabee and David Platt. Also a special thanks to all of my friends on Twitter like @wilhil, @DBVaughan, @DataArtist, @wbm, @DirkStrauss and @rsringeri, and many, many more.
    Software Companies / Events / that may have given me FREE stuff. =)
    Microsoft (3) – I was sent a free coupon code by Microsoft to take the Silverlight 4 Beta Exam. I jumped on the offer and took the exam. It was great being selected to try out the exam before it went public, even though Microsoft eventually published a universal coupon code for everyone. I am still waiting to find out if I passed the exam. My fingers are crossed. Microsoft also reached out to me with some questions regarding the .NET community; I've never had a company contact me with such interest in the community. And they held a contest where 75 people could win a $100 gift certificate and a T-shirt for submitting a Windows Phone 7 app. I submitted my app and won. Finally, thanks for all of the free launch events this year (Windows Phone 7, Visual Studio 2010, ASP.NET MVC).
    Wintellect – For providing an awesome day of free technical training called T.E.N. Where else can you get free training from some of the best programmers in the world? I also won a contest from them that included a NETAdvantage Ultimate license from Infragistics.
    VSLive – I attended the Orlando 2010 conference and it was the best developer's conference that I have ever attended. I got to know a lot of people at this conference and hung out with many wonderful speakers. I live-tweeted the event, and while it may have annoyed some, the organizers of VSLive loved it. I won the contest on Twitter and they invited me back to the 2011 session of my choice. This is a very nice gift and I really appreciate the generosity.
    BarcodeLib.com – For providing free barcode-generating tools for a non-profit ASP.NET project that I was working on. Their third-party controls really made this a breeze compared to my existing solution.
    NDepend – It is absolutely the best tool to improve code quality. The product is extremely large and I would recommend heading over to their site to check it out.
    Silverlight Spy – I was writing a blog post on Silverlight Spy and Koen Zwikstra provided a FREE license to me. If you ever wanted to peek inside of a Silverlight application, then this is the tool for you. He is also working on a version that will support OOB and Windows Phone 7. I would recommend checking out his site.
    Birmingham .NET Users Group / Silverlight Nights User Group – It takes a lot of time to put together a user group meeting every month, yet it always seems to happen. I don't want to name names for fear of leaving someone out, but both of these user groups are excellent if you live in the Birmingham, Alabama area.
Special Shoutout to the following 3rd Party Silverlight Controls It has been a great pleasure to work with the following companies on 3rd Party Control Giveaways every month. It always amazes me how every 3rd Party Control company is so eager to help out the community. I’ve never been turned down by any of these companies! These giveaways have sparked a lot of interest in Silverlight and hopefully I can continue giving away a new set every month. If you are a 3rd Party Control company and are interested in participating in these giveaways then please email me at mbcrump29[at]gmail[d0t].com. The companies below have already participated in my giveaways: Infragistics (December 2010) - Win a set of Infragistics Silverlight Controls with Data Visualization!  Mindscape (November 2010) - Mindscape Silverlight Controls + Free Mega Pack Contest Telerik (October 2010) - Win Telerik RadControls for Silverlight! ($799 Value) Again, I just wanted to say Thanks to everyone for helping me grow as a developer.  Subscribe to my feed

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework. It's also no secret that there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies. To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file. But where's the adventure in that?
    With this inaugural post to That's The Way, I'm going to introduce myself and what my aim is with this blog. If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative year of 1998. Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology. I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud! Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was a kid! This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A. Exciting times.
    So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing, working for the best (IMHO) software company on the planet. With That's The Way I hope to shed a little light on, and peek under the covers of, some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration, to name a few. I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind), but I hope you'll glean at least a tidbit of usefulness with each post. Feel free to comment, as I'm a fairly conversant guy and happy to talk – it's stopping the talking that's the hard part...
    So, back to our regularly-scheduled post, already in progress. By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker. Wait – you didn't find it? Understandable – navigating the voluminous download library within Oracle can be a daunting task. It's pretty simple once you've done it a few times. Log in to the e-delivery site and accept the license terms and restrictions. Then you'll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you'll see a list of applicable products, and you'll click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4). Finally, click the Download button next to Docupresentment (again, version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe.
    Here's your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment.
    Once you've completed the installation, you'll have a shiny new Docupresentment server and client, and if you installed to the default location it will be living in c:\docserv. Unix users, I'm one of you! You'll find it by default in ~/docupresentment/docserv. Forging onward, the meat of this post is learning about some special configuration options. By now you've read the documentation included with the download (specifically ids_book.pdf), which goes into some detail on the rubric of the configuration file, and in fact there's even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation). But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand!
    I shall now proceed with the standard Information Technology Under the Hood Disclaimer: please remember to back up any files before you make changes. I am not responsible for any havoc you may wreak!
    Go to your installation directory and locate your docserv.xml file. Open it in your favorite XML editor. I happen to be fond of Notepad++ with the XML Tools plugin. Almost immediately you will behold the splendor of the configuration file. Just take a moment and let that sink in. Ok – moving on. If you reviewed the documentation, you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings. Let's take a look at <section name="DocumentServer">:
    There are a few entries I'd like to discuss. First, <entry name="StartCommand">. This should be pretty self-explanatory; it's the name of the executable that's run when you fire up Docupresentment. Immediately following that is <entry name="StartArguments">, and as you might imagine these are the arguments passed to the executable. A few things to point out: The –Dids.configuration=docserv.xml parameter specifies the name of your configuration file. The –Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j, so bone up on that before you delve here). The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd party Java libraries can be located for use with Docupresentment. More on that in another post.
    The <entry name="Instances"> setting allows you to specify the number of instances of Docupresentment that will be started. By default this is two, and generally two instances per CPU is adequate; however, you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions. You may have many, many more instances than 2.
    Time for a sidebar on instances: an instance is nothing more than a separate process of Docupresentment. The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes. Each of these acts independently from the others, so if one crashes, it does not affect any others. In the case of a crashed process, the watchdog will start up another instance so that the number of configured instances are always running. Bottom line: instance = Docupresentment process.
    And now, finally, to the settings which gave me pause on a not-too-long-ago implementation!
    Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes. You can configure how long Docupresentment waits between checks of these files using the setting <entry name="FileWatchTimeMillis">. By default the number is 12000 ms, or 12 seconds. You can save yourself a few CPU cycles by extending this time, or disable the check altogether by setting the value to 0. This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don't want to bring down an entire set of processes just to accept a new configuration value, so it's best to leave this somewhere between 12 seconds and a few minutes. Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment, the Master Resource Library (MRL) files and INI options are cached, and if you need to effect a change, you'll have to "restart" Docupresentment. Touching the docserv.xml file is an easy way to do this (other methods include using the RSS request, but that's another post).
    The next item up: <entry name="FilePurgeTimeSeconds">. You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system. What you may not know is how those files are cleaned up. There are many rules in Docupresentment that cause the creation of temporary files. When these files are created, Docupresentment writes an entry into a properties file called the file cache. This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment. Periodically Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator. However, unlike my 'fridge-cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time. You, my friend, have the power to control how often Docupresentment inspects the file cache. Simply set the value for <entry name="FilePurgeTimeSeconds"> to the number of seconds appropriate for your requirements and you're set. Note that file purging happens on a separate thread from normal request processing, so this shouldn't interfere with response times unless the CPU happens to be really taxed at the point of cache processing.
    Finally, after all of this, we get to the final setting I'm going to address in this post: <entry name="FilePurgeList">. The default is "filecache.properties". This establishes the root name for the Docupresentment file cache that I mentioned previously. Docupresentment creates a separate cache file for each instance based on this setting. If you have two instances, you'll see two files created: filecache.properties.1 and filecache.properties.2. Feel free to open these up and check them out. (A consolidated example of these settings appears at the end of this post.)
    I hope you've enjoyed this first foray into the configuration file of Docupresentment. If you did enjoy it, feel free to drop a comment; I welcome feedback. If you have ideas for other posts you'd like to see, please do let me know. You can reach me at [email protected]. 'Til next time! ###
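    To tie the settings above together, here is a hypothetical fragment of <section name="DocumentServer"> in docserv.xml; the entry names come from the post, but the values are illustrative examples only, not recommendations:

    <!-- Hypothetical docserv.xml fragment; entry names are from the post, values are examples only. -->
    <section name="DocumentServer">
      <entry name="Instances">2</entry>                          <!-- 2 processes; tune via load testing -->
      <entry name="FileWatchTimeMillis">60000</entry>            <!-- check config files once a minute (0 disables) -->
      <entry name="FilePurgeTimeSeconds">300</entry>             <!-- sweep the file cache every 5 minutes -->
      <entry name="FilePurgeList">filecache.properties</entry>   <!-- root name of the per-instance file cache -->
    </section>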

    Read the article

  • Why does an error appear every time I try to open the Ubuntu Software Center? [duplicate]

    - by askubuntu7639
This question already has an answer here: How do I remove a broken software source? (3 answers). There is a glitch in the Ubuntu Software Center: whenever I open it, an error appears, and it keeps loading and never opens. Why does this happen? I have installed Ubuntu 13.04 on a disk and partitioned it. Please help me, and ask for additional information if you need it. If you know of any duplicates, please point me to them! This is the error output: SystemError: E:Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list. The next output is the output of cat /etc/apt/sources.list.d/medibuntu.list
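For what it's worth, the SystemError means APT found an HTML document (hence the <!DOCTYPE token) where it expected an APT source line, so every tool that refreshes the package lists, the Software Center included, chokes on it. The linked duplicate resolves it by removing the broken source list; a hedged sketch of that fix, assuming the Medibuntu list is safe to delete outright:

# delete the broken source list, then refresh the package indexes
sudo rm /etc/apt/sources.list.d/medibuntu.list
sudo apt-get update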

    Read the article

  • 8-Puzzle Solution executes infinitely [migrated]

    - by Ashwin
I am looking for a solution to the 8-puzzle problem using the A* algorithm. I found this project on the internet. Please see the files proj1 and EightPuzzle. proj1 contains the entry point for the program (the main() function) and EightPuzzle describes a particular state of the puzzle; each state is an object of the 8-puzzle. I feel that there is nothing wrong in the logic, but it loops forever for these two inputs that I have tried: {8,2,7,5,1,6,3,0,4} and {3,1,6,8,4,5,7,2,0}. Both of them are valid input states. What is wrong with the code? Note: for better viewing, copy the code into Notepad++ or some other text editor that can recognize a Java source file, because there are lots of comments in the code. Since A* requires a heuristic, they have provided the option of using the Manhattan distance and a heuristic that calculates the number of misplaced tiles. And to ensure that the best heuristic is executed first, they have implemented a PriorityQueue. The compareTo() function is implemented in the EightPuzzle class. The input to the program can be changed by changing the value of p1d in the main() function of the proj1 class. The reason I am telling you that there exists a solution for my two inputs above is because the applet here solves them. Please ensure that you select 8-puzzle from the options in the applet. EDIT: I gave this input: {0,5,7,6,8,1,2,4,3}. It took about 10 seconds and gave a result with 26 moves. But the applet gave a result with 24 moves in 0.0001 seconds with A*. For quick reference I have pasted the two classes without the comments:

EightPuzzle

import java.util.*;

public class EightPuzzle implements Comparable<Object> {

    int[] puzzle = new int[9];
    int h_n = 0;
    int hueristic_type = 0;
    int g_n = 0;
    int f_n = 0;
    EightPuzzle parent = null;

    public EightPuzzle(int[] p, int h_type, int cost) {
        this.puzzle = p;
        this.hueristic_type = h_type;
        this.h_n = (h_type == 1) ? h1(p) : h2(p);
        this.g_n = cost;
        this.f_n = h_n + g_n;
    }

    public int getF_n() {
        return f_n;
    }

    public void setParent(EightPuzzle input) {
        this.parent = input;
    }

    public EightPuzzle getParent() {
        return this.parent;
    }

    public int inversions() {
        /*
         * Definition: For any other configuration besides the goal,
         * whenever a tile with a greater number on it precedes a
         * tile with a smaller number, the two tiles are said to be inverted
         */
        int inversion = 0;
        for (int i = 0; i < this.puzzle.length; i++) {
            for (int j = 0; j < i; j++) {
                if (this.puzzle[i] != 0 && this.puzzle[j] != 0) {
                    if (this.puzzle[i] < this.puzzle[j])
                        inversion++;
                }
            }
        }
        return inversion;
    }

    public int h1(int[] list) { // h1 = the number of misplaced tiles
        int gn = 0;
        for (int i = 0; i < list.length; i++) {
            if (list[i] != i && list[i] != 0)
                gn++;
        }
        return gn;
    }

    public LinkedList<EightPuzzle> getChildren() {
        LinkedList<EightPuzzle> children = new LinkedList<EightPuzzle>();
        int loc = 0;
        int temparray[] = new int[this.puzzle.length];
        EightPuzzle rightP, upP, downP, leftP;
        while (this.puzzle[loc] != 0) {
            loc++;
        }
        if (loc % 3 == 0) {
            temparray = this.puzzle.clone();
            temparray[loc] = temparray[loc + 1];
            temparray[loc + 1] = 0;
            rightP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1);
            rightP.setParent(this);
            children.add(rightP);
        } else if (loc % 3 == 1) {
            // add one child, swaps with right
            temparray = this.puzzle.clone();
            temparray[loc] = temparray[loc + 1];
            temparray[loc + 1] = 0;
            rightP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1);
            rightP.setParent(this);
            children.add(rightP);
            // add one child, swaps with left
            temparray = this.puzzle.clone();
            temparray[loc] = temparray[loc - 1];
            temparray[loc - 1] = 0;
            leftP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1);
            leftP.setParent(this);
            children.add(leftP);
        } else if (loc % 3 == 2) {
            // add one child, swaps with left
            temparray = this.puzzle.clone();
            temparray[loc] = temparray[loc - 1];
            temparray[loc - 1] = 0;
            leftP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1);
            leftP.setParent(this);
            children.add(leftP);
        }
        if (loc / 3 == 0) {
            // add one child, swaps with lower
            temparray = this.puzzle.clone();
            temparray[loc] = temparray[loc + 3];
            temparray[loc + 3] = 0;
            downP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1);
            downP.setParent(this);
            children.add(downP);
        } else if (loc / 3 == 1) {
            // add one child, swap with upper
            temparray = this.puzzle.clone();
            temparray[loc] = temparray[loc - 3];
            temparray[loc - 3] = 0;
            upP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1);
            upP.setParent(this);
            children.add(upP);
            // add one child, swap with lower
            temparray = this.puzzle.clone();
            temparray[loc] = temparray[loc + 3];
            temparray[loc + 3] = 0;
            downP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1);
            downP.setParent(this);
            children.add(downP);
        } else if (loc / 3 == 2) {
            // add one child, swap with upper
            temparray = this.puzzle.clone();
            temparray[loc] = temparray[loc - 3];
            temparray[loc - 3] = 0;
            upP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1);
            upP.setParent(this);
            children.add(upP);
        }
        return children;
    }

    public int h2(int[] list) {
        // h2 = the sum of the distances of the tiles from their goal positions:
        // for each item find its goal position, then calculate how many
        // positions it needs to move to get into that position
        int gn = 0;
        int row = 0;
        int col = 0;
        for (int i = 0; i < list.length; i++) {
            if (list[i] != 0) {
                row = list[i] / 3;
                col = list[i] % 3;
                row = Math.abs(row - (i / 3));
                col = Math.abs(col - (i % 3));
                gn += row;
                gn += col;
            }
        }
        return gn;
    }

    public String toString() {
        String x = "";
        for (int i = 0; i < this.puzzle.length; i++) {
            x += puzzle[i] + " ";
            if ((i + 1) % 3 == 0)
                x += "\n";
        }
        return x;
    }

    public int compareTo(Object input) {
        if (this.f_n < ((EightPuzzle) input).getF_n())
            return -1;
        else if (this.f_n > ((EightPuzzle) input).getF_n())
            return 1;
        return 0;
    }

    public boolean equals(EightPuzzle test) {
        if (this.f_n != test.getF_n())
            return false;
        for (int i = 0; i < this.puzzle.length; i++) {
            if (this.puzzle[i] != test.puzzle[i])
                return false;
        }
        return true;
    }

    public boolean mapEquals(EightPuzzle test) {
        for (int i = 0; i < this.puzzle.length; i++) {
            if (this.puzzle[i] != test.puzzle[i])
                return false;
        }
        return true;
    }
}

proj1

import java.util.*;

public class proj1 {

    public static void main(String[] args) {
        int[] p1d = {1, 4, 2, 3, 0, 5, 6, 7, 8};
        int hueristic = 2;
        EightPuzzle start = new EightPuzzle(p1d, hueristic, 0);
        int[] win = {0, 1, 2, 3, 4, 5, 6, 7, 8};
        EightPuzzle goal = new EightPuzzle(win, hueristic, 0);
        astar(start, goal);
    }

    public static void astar(EightPuzzle start, EightPuzzle goal) {
        if (start.inversions() % 2 == 1) {
            System.out.println("Unsolvable");
            return;
        }
        // function A*(start,goal)
        // closedset := the empty set -- the set of nodes already evaluated
        LinkedList<EightPuzzle> closedset = new LinkedList<EightPuzzle>();
        // openset := set containing the initial node -- the set of tentative
        // nodes to be evaluated (a priority queue)
        PriorityQueue<EightPuzzle> openset = new PriorityQueue<EightPuzzle>();
        openset.add(start);
        while (openset.size() > 0) {
            // x := the node in openset having the lowest f_score[] value
            EightPuzzle x = openset.peek();
            // if x = goal
            if (x.mapEquals(goal)) {
                // return reconstruct_path(came_from, came_from[goal])
                Stack<EightPuzzle> toDisplay = reconstruct(x);
                System.out.println("Printing solution... ");
                System.out.println(start.toString());
                print(toDisplay);
                return;
            }
            // remove x from openset, add x to closedset
            closedset.add(openset.poll());
            LinkedList<EightPuzzle> neighbor = x.getChildren();
            // foreach y in neighbor_nodes(x)
            while (neighbor.size() > 0) {
                EightPuzzle y = neighbor.removeFirst();
                // if y in closedset, continue
                if (closedset.contains(y)) {
                    continue;
                }
                // if y not in openset, add y to openset
                if (!closedset.contains(y)) {
                    openset.add(y);
                }
            }
        }
    }

    public static void print(Stack<EightPuzzle> x) {
        while (!x.isEmpty()) {
            EightPuzzle temp = x.pop();
            System.out.println(temp.toString());
        }
    }

    public static Stack<EightPuzzle> reconstruct(EightPuzzle winner) {
        Stack<EightPuzzle> correctOutput = new Stack<EightPuzzle>();
        while (winner.getParent() != null) {
            correctOutput.add(winner);
            winner = winner.getParent();
        }
        return correctOutput;
    }
}
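A note from the editing desk rather than the original post: the likely culprit is visible in the code above. EightPuzzle declares equals(EightPuzzle), which overloads rather than overrides Object.equals(Object), so LinkedList.contains() falls back to reference equality and the closed-set test never recognizes a revisited board; on top of that, the second check in astar() was presumably meant to test openset, so duplicate states flood the priority queue. A sketch of the two repairs (my reading, not a confirmed fix from the author):

    // In EightPuzzle: override Object.equals (and hashCode) so contains()
    // compares board states instead of object identity.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof EightPuzzle))
            return false;
        return Arrays.equals(this.puzzle, ((EightPuzzle) o).puzzle);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(this.puzzle);
    }

    // In proj1.astar(): skip a neighbor that is already on either list.
    // if (closedset.contains(y) || openset.contains(y)) continue;
    // openset.add(y);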

    Read the article

  • CodePlex Daily Summary for Sunday, June 09, 2013

Popular Releases

- ZXMAK2: Version 2.7.5.5: several fixes for joystick scan
- VG-Ripper & PG-Ripper: PG-Ripper 1.4.13: changes: NEW: added support for "ImageJumbo.com" links; FIXED: ripping of threads with multiple pages
- CKEditor™ Provider for DotNetNuke®: CKEditor Provider 2.00.05: What's New: Updated to CKEditor 4.1.1; added Auto Save function (autosave plugin) {Delay can be defined in the Config - Default is 25}; new setting for the Default Link Type (Editor Config tab); added CodeMirror plugin settings to the Editor Config tab; added WordCount plugin settings to the Editor Config tab; added maximum upload file size info to the Upload dialog; added check for maximum upload size on Quick Upload and File Browser upload. File-Browser: Fixed an issue with S...
- Property Framework: Property Framework (binaries) Latest: latest stable 6/8/2013
- xFunc: xFunc (2.2.0.0): Added: user functions
- PHP Vulnerability Hunter: PHP Vulnerability Hunter 1.4.0.20 Alpha
- Xomega Framework: Xomega.Framework 1.4: adds support for Visual Studio 2012 and .NET Framework 4.5; minor bug fixes and enhancements.
- sb0t v.5: sb0t 5.14: Stability fix in script engine. Avatar.exists property fixed in scripting. cb0t custom font protocol re-added and updated to support new Ares.
- ASP.NET MVC Forum: MVCForum v1.3.5: This is a bug-fix release, with a couple of small usability features and UI changes. All of the small number of bugs reported in v1.3 have been fixed; no upgrade needed, just overwrite the files and everything should just work.
- Json.NET: Json.NET 5.0 Release 6: New feature: added serialized/deserialized JSON to verbose tracing. New feature: added support for using type name handling with ISerializable content. Fix: fixed not using default serializer settings with primitive values and JToken.ToObject. Fix: fixed error writing BigIntegers with JsonWriter.WriteToken. Fix: fixed serializing and deserializing flag enums with EnumMember attribute. Fix: fixed error deserializing interfaces with a valid type converter. Fix: fixed error deser...
- Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.3 for VS2012: V2.3, release date 6/5/2013. Items addressed in this 2.3 release: fixed bad namespace for BusinessController in one of the C# templates; updated documentation in all templates. See: Setting up your DotNetNuke Module Development Environment; Installing Christoc's DotNetNuke Module Development Templates; Customizing the latest DotNetNuke Module Development Project Templates
- Pulse: Pulse 0.6.7.0: A number of small bug fixes to stabilize the previous Beta. Sorry about the never-ending "New Version" bug!
- QlikView Extension - Animated Scatter Chart: Animated Scatter Chart - v1.0: Version 1.0, including source code, qar file and an example QlikView application. Tested with browsers Firefox 20 (x64), Google Chrome 27 (x64) and Internet Explorer 9, and with QlikView Desktop 11 - SR2 (x64), QlikView Desktop 11.2 - SR1 (x64) and QlikView Ajax Client 11.2 - SR2 (based on x64)
- BarbaTunnel: BarbaTunnel 7.2: Warning: HTTP Tunnel is not compatible with version 6.x and prior; the HTTP packet format has been changed. Check Version History for more information about this release.
- SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.8: This release includes these changes: upgraded SuperSocket to 1.5.3, which is much more stable; added handshake request validating API (WebSocketServer.ValidateHandshake(TWebSocketSession session, string origin)); fixed a bug where m_Filters in SubCommandBase can be null if the command's LoadSubCommandFilters(IEnumerable<SubCommandFilterAttribute> globalFilters) method is not invoked; fixed the compatibility issue with Origin handling across different protocol versions; marked ISub...
- BlackJumboDog: Ver5.9.0: 2013.06.04 Ver5.9.0 (1) ?????????????????????????????????($Remote.ini Tmp.ini) (2) ThreadBaseTest?? (3) ????POP3??????SMTP???????????????? (4) Web???????、?????????URL??????????????? (5) Ftp???????、LIST?????????????? (6) ?????????????????????
- Media Companion: Media Companion MC3.569b: New: Movies - autoscrape/batch rescrape extra fanart and/or extra thumbs; Movies - alternative editor can manually add actors; TV - batch rescraper autoscrapes extrafanart, if the option is enabled. Fixed: Movies - slow performance switching to the movie tab, by adding the option 'Disable "Not Matching Rename Pattern"' to Movie Preferences - General; Movies - fixed only actors with images being scraped and added to the nfo; Movies - fixed filter reset if the selected tab was above Home Movies. Updated Medi...
- Nearforums - ASP.NET MVC forum engine: Nearforums v9.0: Version 9.0 of Nearforums, with great new features for users and developers: SQL Azure support; admin UI for forum categories; avoid HTML validation for certain roles; improved profile picture moderation and support; warn, suspend, and ban users; web administration of site settings; extensions support. Visit the Roadmap for more details. Webdeploy package sha1 checksum: 9.0.0.0: e687ee0438cd2b1df1d3e95ecb9d66e7c538293b
- Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.93: Added -esc:BOOL switch (CodeSettings.AlwaysEscapeNonAscii property) to always force non-ASCII characters (ch > 0x7f) to be escaped as the JavaScript \uXXXX sequence. This switch should be used if creating a Symbol Map and outputting the result to a text encoding other than UTF-8 or UTF-16 (ASCII, for instance). Fixed a bug where a complex comma operation is the operand of a return statement, and it was looking at the wrong variable for possible optimization of = to just .
- Document.Editor: 2013.22: What's new for Document.Editor 2013.22: improved bullet list support; improved number list support; minor bug fixes, improvements and speed-ups

New Projects

- Acer 1420p Leaky Handle Fix: Fixes leaking handles on the Acer 1420p laptop given out at PDC09.
- Akismet Spam Filter for Community Server 2008.5: Akismet spam filter for Community Server 2008.5
- Atom Timer: Atom Timer is a thread-based timer that allows schedules to be created using events.
- BRICK CMS: These days, I am tired of hearing that .NET is going down and Java/Ruby/Python will replace it. Yes, they have been growing while .NET's been going down. Do or die?
- DataTestFramework: ???????&ORM??????
- Date/Time Interval: The Date Time Interval allows different types of interval to be created. The class will enumerate the defined interval, supporting LINQ statements. More informa
- Dimas.Net: .NET infrastructure to create a web/service server from scratch. It includes n-tier, logging, policy injection, a mapper, MVC best practice, etc.
- Gannet: Gannet is an operating system for us (the target developers) to learn about how an operating system is put together and what components are needed.
- Image Resize For Android: Android????????????
- LightBlog: LightBlog?????Node.js,Express??,Mongodb???markdown??????????
- Memory: Live artistic interaction using Kinect
- NestedHtmlWriter: This is a helper class library for writing simple HTML documents, using the C# using statement.
- Operation Sneak Peek: Windows Phone game that includes stealth and logic gameplay. The player has to look for hidden letters to discover a secret word and use it to defuse a bomb.
- Orchard DarkStripes Theme: Orchard theme based on Octopress DarkStripes
- Path copy from context menu: ????????????????????????????
- Phantomas: mouhouhahahaha
- SE1: NO SUMMARY!
- SiteLinks DNN Module: The SiteLinks DNN module displays a list of existing links on your DNN website. This module works similarly to the DNN "Links" skin object.
- sql to object maping: SqlString CodeMap
- TCP/IP Communication Framework: TCP/IP Communication Framework (TCP/IP CF) is a library that wraps the .NET Socket class and defines several classes for developing communication applications.
- UTorrentClient Api: UTorrentClient Api is an extensible set of classes that use WebUI to manipulate µTorrent remotely.
- Visual Studio Spell Checker: A Visual Studio editor extension that checks the spelling of comments, strings, and plain text as you type. Supports configuration and various languages.
- zjsru_xyw: this is a test project
- ZTrans: ztrans is a language for embedded software development
- ???: test
- ?????????: ??????????? ????:VS2012+SQL2012 ????:ASP.NET(.NET 4.0) ????:MVC3+EF5 ????: ?????,??,?? ???????,??,?? ????????,??,?? ????DIV+CSS?? Jquery??1.6.4 ??Ajax??????

    Read the article

  • The Execute SQL Task

In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows:

- A tour of the task.
- The properties of the task.

After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task:

- Returning a single value from a SQL query with two input parameters.
- Returning a rowset from a SQL query.
- Executing a stored procedure and retrieving a rowset, a return value and an output parameter value, and passing in an input parameter.
- Passing in the SQL statement from a variable.
- Passing in the SQL statement from a file.

Tour Of The Task

Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package, expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, a validation error appears telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. Only certain types of connection manager are compatible with this task, so we cannot just create any connection manager; these are detailed a few graphics later.

Double-click on the task itself to take a look at the custom user interface provided for this task. The task will open on the General tab as shown below. Take a bit of time to have a look around here, as throughout this article we will be revisiting this page many times. Whilst on the General tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the ResultSet property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL statement into the task itself. We will have examples of each of these later on, but certainly when we saw these for the first time we were very excited.

Next to the SQLStatement property, if you click in the empty box beside it you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task, you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent.

Once you've finished looking around the General tab, move on to the next tab, which is the Parameter Mapping tab. We shall, again, be visiting this tab throughout the article, but to give you an initial heads-up, this is where you define the input, output and return values for your task. Note this is not where you specify the resultset. If, however, you now move on to the Result Set tab, this is where you define what variable will receive the output from your SQL statement in whatever form that is.
Property Expressions are one of the most amazing things to happen in SSIS, and they will not be covered here as they deserve a whole article to themselves. Watch out for these, as their usefulness will astound you. For a more detailed discussion of what the parameter markers in the SQL statements on the General tab should be and how to map them to variables on the Parameter Mapping tab, see Working with Parameters and Return Codes in the Execute SQL Task.

Task Properties

There are two places where you can specify the properties for your task. One is in the task UI itself, and the other is in the property pane which will appear if you right-click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later, so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks, as well as the package, because of everything's base structure, the Container.

BypassPrepare: Should the statement be prepared before sending to the connection manager destination (True/False)?
Connection: This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package.
DelayValidation: A really interesting property; it tells the task not to validate until it actually executes. One use for this may be that you are operating on a table yet to be created, but at runtime you know the table will be there.
Description: Very simply, the description of your task.
Disable: Should the task be enabled or not? You can also set this through a context menu by right-clicking on the task itself.
DisableEventHandlers: As a result of events that happen in the task, should the event handlers for the container fire?
ExecValueVariable: The variable assigned here will get or set the execution value of the task.
Expressions: Expressions, as we mentioned earlier, are a really powerful tool in SSIS, and the graphic below shows a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility.
FailPackageOnFailure: If this task fails, does the package?
FailParentOnFailure: If this task fails, does the parent container? A task can be hosted inside another container, e.g. the For Each Loop Container, and this would then be the parent.
ForcedExecutionValue: This property allows you to hard-code an execution value for the task.
ForcedExecutionValueType: What is the datatype of the ForcedExecutionValue?
ForceExecutionResult: Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion.
ForceExecutionValue: Should we force the execution result?
IsolationLevel: This is the transaction isolation level of the task.
IsStoredProcedure: Certain optimisations are made by the task if it knows that the query is a stored procedure invocation. The docs say this will always be false unless the connection is an ADO connection.
LocaleID: Gets or sets the LocaleID of the container.
LoggingMode: Should we log for this container, and what settings should we use?
The value choices are UseParentSetting, Enabled and Disabled.
MaximumErrorCount: How many times can the task fail before we call it a day?
Name: Very simply, the name of the task.
ResultSetType: How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML.
SqlStatementSource: Your query/SQL statement.
SqlStatementSourceType: The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables.
TimeOut: How long should the task wait to receive results?
TransactionOption: How should the task handle being asked to join a transaction?

Usage Examples

As we move through the examples we will only cover what we think you must know and what we think you should see. This means that some of the more elementary steps, like setting up variables, will be covered in the early examples but skipped and simply referred to in later ones. All these examples use the AdventureWorks database that comes with SQL Server 2005.

Returning a Single Value, Passing in Two Input Parameters

So the first thing we are going to do is add some variables to our package. The graphic below shows those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query, and EndDate and StartDate will be used as input parameters. As you can see, all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope, and remember, a package is a container as well. Variable values of the parent container can be seen in child containers but cannot be passed back up to the parent from a child.

Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this task, ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input, as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The statement we typed in was:

SELECT COUNT(*) AS CountOfEmployees
FROM HumanResources.Employee
WHERE (HireDate BETWEEN ? AND ?)

Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see, SSIS has preceded the variable with the word User. This is a default namespace for variables, but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options, one of which is Namespace; after checking this you will see where you can define your own namespace.

The next tab, Result Set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value then, if you remember from earlier, we are allowed to assign a name to the resultset, but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Development Studio being hosted by Visual Studio is that we get breakpoint support for free.
In our package we set a breakpoint so we can break the package and look in a watch window at the variable values as they appear to our task, and at what the variable value of our resultset is after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2.

Returning a Rowset

In this example we are going to return a resultset back to a variable after the task has executed, not just a single-row, single value. There are no input parameters required, so the variables window is nice and straightforward: one variable of type Object. Here is the statement that will form the source for our resultset:

select p.ProductNumber, p.name, pc.Name as ProductCategoryName
FROM Production.ProductCategory pc
JOIN Production.ProductSubCategory psc
  ON pc.ProductCategoryID = psc.ProductCategoryID
JOIN Production.Product p
  ON psc.ProductSubCategoryID = p.ProductSubCategoryID

We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the Parameter Mapping tab and move straight to the Result Set tab. Here we need to add our variable defined earlier and map it to the result name of 0 (remember, we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object, and in later articles that is exactly what we shall be doing.

Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure

This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a full result set, so we will not cover it again here. For this example we are going to need four variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000.

Passing in the SQL Statement from a Variable

SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The Execute SQL Task is no different. It will allow us to pass in a string variable as the SQL statement. This variable value could have been set earlier on from inside the package, or it could have been populated from outside using a configuration. The ResultSet property is set to single row, and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General tab again. Looking at the variables we have in this package, you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the result name to our variable, and this can be a named result name (the column name or alias returned by the query) and not 0.
The expected result in our variable should be the number of rows in the Person.Contact table, and if we look in the watch window we see that it is.

Passing in the SQL Statement from a File

The final example we are going to show is a really interesting one. We are going to pass the SQL statement to the task by using a file connection manager. The file itself contains the statement to run. The first thing we need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right-click and choose "New File Connection". As you can see in the graphic below, we have chosen to use an existing file and have passed in the name as well. Have a look around at the other Usage Type values available whilst you are here. Having set that up, we can now see in the connection manager tray our file connection manager sitting alongside the OLE DB connection we have been using for the rest of these examples. Now we can go back to the familiar General tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for the other examples, depending on the options chosen, so we will not cover them again here.

We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available, but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.
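The stored procedure statement itself is not reproduced in the excerpt above. For an OLE DB connection manager, the parameter-marker syntax for the third example typically looks like the sketch below; the procedure and parameter names are hypothetical, and each ? is bound by ordinal (0, 1, 2) on the Parameter Mapping tab:

-- marker 0 captures the return value, marker 1 is the input parameter,
-- marker 2 receives the output parameter
EXEC ? = dbo.uspGetEmployeeStats ?, ? OUTPUT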

    Read the article

  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
In the first week of concurrent collections, we began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>. The last post discussed the ConcurrentDictionary<T>. Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux.

Recap

As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member. With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization. With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace. Of these, the final concurrent collection we will examine is the ConcurrentBag, along with a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here.

ConcurrentBag<T> – Thread-safe unordered collection

Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries. Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order. This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it's possible new items could be consumed before older items, given the right circumstances, for a period of time.

So why would you ever want a container that can be unfair? Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit. Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don't care what item it is or how long it's been waiting.

The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread. This means that adding or removing to a thread-local list requires very low synchronization. The problem comes in when a thread goes to consume an item but its local list is empty. In this case the bag performs "work-stealing", where it will rob an item from another thread that has items in its list. This requires a higher level of synchronization, which adds a bit of overhead to the take operation.

So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less than ideal in situations where some threads are dedicated producers and the other threads are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items.
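As a quick orientation before the post's own worked example further down, here is a minimal sketch (the program and names are mine, not the author's) of the bag's basic Add/TryTake surface:

using System;
using System.Collections.Concurrent;

class BagSketch
{
    static void Main()
    {
        var bag = new ConcurrentBag<string>();

        // Add() appends to the calling thread's local list - very cheap.
        bag.Add("first");
        bag.Add("second");

        // TryTake() prefers the local list and steals from another
        // thread's list only when the local one is empty.
        string item;
        while (bag.TryTake(out item))
        {
            // Order is unspecified - it's a bag, not a queue.
            Console.WriteLine(item);
        }
    }
}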
Like the other concurrent collections, there are some curiosities to keep in mind:

- IsEmpty(), Count, ToArray(), and GetEnumerator() lock the collection. Each of these needs to take a snapshot of the whole bag to determine the answer, thus they tend to be more expensive and cause Add() and Take() operations to block.
- ToArray() and GetEnumerator() are static snapshots. Because they are based on a snapshot, they will not show subsequent updates made after the snapshot.
- Add() is lightweight. Since it adds to the thread-local list, there is very little overhead on Add.
- TryTake() is lightweight if items are in the thread-local list. As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack() and ConcurrentQueue(); however, if the local thread list is empty, it must steal work from another thread, which is more expensive.

Remember, a bag is not ideal for all situations; it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete. The main point is that the bag works best when each thread both takes and adds items.

For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold. Yes, obviously we could easily do this with a log function, but bear with me while I use this contrived example for simplicity. So let's say we have a work function that will take a Tuple out of a bag; this Tuple will contain two ints. The first int is the original number, and the second int is the last power of that number. So we could load our bag with the initial values (let's say we want to know the highest power of each of 2, 3, 5, and 7 under 100):

var bag = new ConcurrentBag<Tuple<int, int>>
{
    Tuple.Create(2, 1),
    Tuple.Create(3, 1),
    Tuple.Create(5, 1),
    Tuple.Create(7, 1)
};

Then we can create a method that, given the bag, will take out an item and apply the multiplier again:

public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold)
{
    Tuple<int,int> pair;

    // while there are items to take, this will prefer local first, then steal if no local
    while (bag.TryTake(out pair))
    {
        // look at next power
        var result = Math.Pow(pair.Item1, pair.Item2 + 1);

        if (result < threshold)
        {
            // if smaller than threshold bump power by 1
            bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1));
        }
        else
        {
            // otherwise, we're done
            Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.",
                pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold);
        }
    }
}

Now that we have this, we can load up this method as an Action into our Tasks and run it:

// create array of tasks, start all, wait for all
var tasks = new[]
{
    new Task(() => FindHighestPowerUnder(bag, 100)),
    new Task(() => FindHighestPowerUnder(bag, 100)),
};

Array.ForEach(tasks, t => t.Start());

Task.WaitAll(tasks);

Totally contrived, I know, but keep in mind the main point! When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag, as the thread-local lists will allow for quick processing.
However, if you need ordering or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test as your mileage will vary depending on your situation! BlockingCollection<T> – A producers & consumers pattern collection The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don’t specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don’t want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don’t remember the producers and consumers classical computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that “bin” where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of “bin”, for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let’s look at some of the methods of note in BlockingCollection: BoundedCapacity returns capacity of the “bin” If the bin is unbounded, the capacity is int.MaxValue. Count returns an internally-kept count of items This makes it O(1), but if you modify underlying collection directly (not recommended) it is unreliable. CompleteAdding() is used to cut off further adds. This sets IsAddingCompleted and begins to wind down consumers once empty. IsAddingCompleted is true when producers are “done”. Once you are done producing, should complete the add process to alert consumers. IsCompleted is true when producers are “done” and “bin” is empty. Once you mark the producers done, and all items removed, this will be true. Add() is a blocking add to collection. If bin is full, will wait till space frees up Take() is a blocking remove from collection. If bin is empty, will wait until item is produced or adding is completed. GetConsumingEnumerable() is used to iterate and consume items. Unlike the standard enumerator, this one consumes the items instead of iteration. TryAdd() attempts add but does not block completely If adding would block, returns false instead, can specify TimeSpan to wait before stopping. 
- TryTake() attempts to take but does not block completely. Like TryAdd(), if taking would block, it returns false instead; you can specify a TimeSpan to wait.

Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added. This means that any attempts to TryAdd() or Add() after being marked completed will throw an InvalidOperationException. In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false.

So let's create a simple program to try this out. Let's say that you have one process that will be producing items, but a slower consumer process that handles them. This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded):

var bin = new BlockingCollection<int>(5);

Now, we create a method to produce items:

public static void ProduceItems(BlockingCollection<int> bin, int numToProduce)
{
    for (int i = 0; i < numToProduce; i++)
    {
        // try for 10 ms to add an item
        while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10)))
        {
            Console.WriteLine("Bin is full, retrying...");
        }
    }

    // once done producing, call CompleteAdding()
    Console.WriteLine("Adding is completed.");
    bin.CompleteAdding();
}

And one to consume them:

public static void ConsumeItems(BlockingCollection<int> bin)
{
    // This will only be true if CompleteAdding() was called AND the bin is empty.
    while (!bin.IsCompleted)
    {
        int item;

        if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10)))
        {
            Console.WriteLine("Bin is empty, retrying...");
        }
        else
        {
            Console.WriteLine("Consuming item {0}.", item);
            Thread.Sleep(TimeSpan.FromMilliseconds(20));
        }
    }
}

Then we can fire them off:

// create one producer and two consumers
var tasks = new[]
{
    new Task(() => ProduceItems(bin, 20)),
    new Task(() => ConsumeItems(bin)),
    new Task(() => ConsumeItems(bin)),
};

Array.ForEach(tasks, t => t.Start());

Task.WaitAll(tasks);

Notice that the producer is faster than the consumer; thus it should be hitting a full bin often and displaying the message after it times out on TryAdd():

Consuming item 0.
Consuming item 1.
Bin is full, retrying...
Bin is full, retrying...
Consuming item 3.
Consuming item 2.
Bin is full, retrying...
Consuming item 4.
Consuming item 5.
Bin is full, retrying...
Consuming item 6.
Consuming item 7.
Bin is full, retrying...
Consuming item 8.
Consuming item 9.
Bin is full, retrying...
Consuming item 10.
Consuming item 11.
Bin is full, retrying...
Consuming item 12.
Consuming item 13.
Bin is full, retrying...
Bin is full, retrying...
Consuming item 14.
Adding is completed.
Consuming item 15.
Consuming item 16.
Consuming item 17.
Consuming item 19.
Consuming item 18.

Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit.

Summary

The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items. In this way, it will choose to consume its own work if available, and then steal if not.
However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around the concurrent queue, stack, and bag collections that allows you to add producer and consumer semantics easily, including waiting when the bin is full or empty.

That's the end of my dive into the concurrent collections. I'd also strongly recommend, once again, that you read the excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain by using these collections judiciously (here).
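One method from the list above that the excerpt never shows in action is GetConsumingEnumerable(). Here is a minimal sketch (the program, collection name and sizes are mine, not the author's) of a consumer written with it; the foreach blocks while the bin is empty and ends cleanly once CompleteAdding() has been called and the bin drains:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ConsumingEnumerableSketch
{
    static void Main()
    {
        var bin = new BlockingCollection<int>(5);

        var consumer = Task.Factory.StartNew(() =>
        {
            // Consumes items as they arrive; the loop exits on its own
            // once CompleteAdding() is called and the bin is empty.
            foreach (int item in bin.GetConsumingEnumerable())
            {
                Console.WriteLine("Consuming item {0}.", item);
            }
        });

        for (int i = 0; i < 20; i++)
        {
            bin.Add(i); // blocks while the bin is full
        }
        bin.CompleteAdding();

        consumer.Wait();
    }
}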

    Read the article

< Previous Page | 6 7 8 9 10 11  | Next Page >