Search Results

Search found 17406 results on 697 pages for 'option explicit'.


  • The Unintended Consequences of Sound Security Policy

    - by Tanu Sood
    Author: Kevin Moulton, CISSP, CISM

    Meet the Author: Kevin Moulton, Senior Sales Consulting Manager, Oracle. Kevin Moulton, CISSP, CISM, has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog, so stay tuned for more posts from him.

    When I speak to a room of IT administrators, I like to begin by asking them if they have implemented a complex password policy. Generally, they all nod their heads enthusiastically. I ask them if that password policy requires long passwords. More nodding. I ask if that policy requires upper and lower case letters – faster nodding – numbers – even faster – special characters – enthusiastic nodding all around! I then ask them if their policy also includes a requirement for users to regularly change their passwords. Now we have smiles with the nodding! I ask them if the users have different IDs and passwords on the many systems that they have access to. Of course! I then ask them if, when they walk around the building, they see something like this: Thanks to Jake Ludington for the nice example.

    Can these administrators be faulted for their policies? Probably not, but in the end, end-users will find a way to get their job done efficiently. Post-It Notes to the rescue!

    I was visiting a business in New York City one day which was a perfect example of this problem. First I walked up to the security desk and told them where I was headed. They asked me if they should call upstairs to have someone escort me. Is that my call? Is that policy? I said that I knew where I was going, so they let me go. Having the conference room number handy, I wandered around the place in search of my destination. As I walked around, unescorted, I noticed the post-it note problem in abundance. Had I been so inclined, I could have logged in on almost any machine and into any number of systems. When I reached my intended conference room, I mentioned my post-it note observation to the two gentlemen with whom I was meeting. One of them said, "You mean like this," and he produced a post-it note full of login IDs and passwords from his breast pocket! I gave him kudos for not hanging the list on his monitor.

    We then talked for the rest of the meeting about the difficulties faced by the employees due to the security policies. These policies, although well-intended, made life very difficult for the end-users. Most users had access to 8 to 12 systems, and the passwords for each expired at different times. The post-it note solution was understandable.
    Who could remember even half of them? What could this customer have done differently? I am a fan of using a provisioning system, such as Oracle Identity Manager, to manage all of the target systems. With OIM, an email could be automatically sent to all users when it was time to change their password. The end-users would follow a link to change their password on a web page, and then OIM would propagate that password out to all of the systems that the user had access to, even if the login IDs were different. Another option would be an Enterprise Single Sign-On solution. With Oracle eSSO, all of a user's credentials would be stored in a central, encrypted credential store. The end-user would only have to log in to their machine each morning and then, as they moved to each new system, Oracle eSSO would supply the credentials. Good-bye post-it notes! 3M may be disappointed, but your end users will thank you.

    I hear people say that this post-it note problem is not a big deal, because the only people who would see the passwords are fellow employees. Do you really know who is walking around your building? What are the password policies in your business? How do the end-users respond?

    Read the article

  • How to remap Fn key combinations (Lenovo G500)

    - by Anatoli
    I am running Kubuntu 13.10 on a Lenovo G500 laptop. My question is similar to this one: How can I remap my F keys on my HP laptop? That is to say, my F1-F12 keys are mapped to certain special functions, and only holding down the Fn key restores access to the standard F1-F12 keys. How do I remap certain keys? I would like to know if there is a way to remap Fx to Fn+Fx and vice-versa. As per the instructions of #87043 I checked my BIOS and there is no option to switch the Fx/Fn key functionality. Googling through Lenovo's support forums indicates a BIOS update enabling this is in the works, but there's no indication of when it will be complete.

    Using xev I was able to see what X sees when F1-F12 are pressed. Some send separate keycodes, but some are somehow mapped to key combinations or other unknown things:

    F1 - XF86AudioMute
    F2 - XF86AudioVolumeLower
    F3 - XF86AudioVolumeRaise
    F4 - Alt_L + F4
    F5 - F5
    F6 - Disables touchpad, cannot quite understand what xev tells me is happening, re-enables if disabled (Kernel log reveals these have well-defined scancodes not assigned to any keycodes)
    F7 - XF86WLAN
    F8 - Alt_L + Ctrl_L + Tab
    F9 - Turns off LCD backlight, xev sees nothing
    F10 - Super_L + p
    F11 - XF86MonBrightnessLower
    F12 - XF86MonBrightnessRaise

    Following the instructions on this page: How do I remap certain keys? I remapped all the keys that have definite keycodes (F1, F2, F3, F5, F7, F11, F12). This still leaves the F4, F6, F8, F9, F10 keys not functioning properly. This is especially frustrating since F4, F6, F9 now kill the current window, the touchpad and screen, respectively. Any help on remapping these keys to their proper functions would be much appreciated! -Anatoli

    xev output for these 5 keys:

    F4 KeyPress event, serial 40, synthetic NO, window 0x4800001, root 0x9d, subw 0x0, time 3674037, (228,298), root:(911,321), state 0x0, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False FocusOut event, serial 40, synthetic NO, window 0x4800001, mode NotifyGrab, detail NotifyAncestor FocusIn event, serial 40, synthetic NO, window 0x4800001, mode NotifyUngrab, detail NotifyAncestor KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 4294967197 0 0 0 0 0 0 0 65 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 KeyRelease event, serial 40, synthetic NO, window 0x4800001, root 0x9d, subw 0x0, time 3674040, (228,298), root:(911,321), state 0x8, keycode 70 (keysym 0xffc1, F4), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 40, synthetic NO, window 0x4800001, root 0x9d, subw 0x0, time 3674042, (228,298), root:(911,321), state 0x8, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False ClientMessage event, serial 40, synthetic YES, window 0x4800001, message_type 0x12a (WM_PROTOCOLS), format 32, message 0x12b (WM_DELETE_WINDOW)

    F6 disabling touchpad MappingNotify event, serial 40, synthetic NO, window 0x0, request MappingKeyboard, first_keycode 8, count 248 FocusOut event, serial 40, synthetic NO, window 0x4600001, mode NotifyGrab, detail NotifyAncestor FocusIn event, serial 40, synthetic NO, window 0x4600001, mode NotifyUngrab, detail NotifyAncestor KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 MappingNotify event, serial 41, synthetic NO, window 0x0, request MappingKeyboard, first_keycode 8, count 248 F6 enabling
touchpad MappingNotify event, serial 42, synthetic NO, window 0x0, request MappingKeyboard, first_keycode 8, count 248 FocusOut event, serial 42, synthetic NO, window 0x4600001, mode NotifyGrab, detail NotifyAncestor FocusIn event, serial 42, synthetic NO, window 0x4600001, mode NotifyUngrab, detail NotifyAncestor KeymapNotify event, serial 42, synthetic NO, window 0x0, keys: 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 MappingNotify event, serial 43, synthetic NO, window 0x0, request MappingPointer, first_keycode 0, count 0 F8 doing whatever it is F8 does KeyPress event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3508985, (13,-12), root:(696,11), state 0x0, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyPress event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3508986, (13,-12), root:(696,11), state 0x8, keycode 37 (keysym 0xffe3, Control_L), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyPress event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3508988, (13,-12), root:(696,11), state 0xc, keycode 23 (keysym 0xff09, Tab), same_screen YES, XLookupString gives 1 bytes: (09) " " XmbLookupString gives 1 bytes: (09) " " XFilterEvent returns: False KeyRelease event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3508989, (13,-12), root:(696,11), state 0xc, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3508991, (13,-12), root:(696,11), state 0x4, keycode 37 (keysym 0xffe3, Control_L), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3508994, (13,-12), root:(696,11), state 0x0, keycode 23 (keysym 0xff09, Tab), same_screen YES, XLookupString gives 1 bytes: (09) " " XFilterEvent returns: False F9 gives no output to xev F10 doing whatever it is F10 does KeyRelease event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3586076, (9,-14), root:(692,9), state 0x0, keycode 10 (keysym 0x31, 1), same_screen YES, XLookupString gives 1 bytes: (31) "1" XFilterEvent returns: False KeyPress event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3586552, (9,-14), root:(692,9), state 0x0, keycode 133 (keysym 0xffeb, Super_L), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyPress event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3586554, (9,-14), root:(692,9), state 0x40, keycode 33 (keysym 0x70, p), same_screen YES, XLookupString gives 1 bytes: (70) "p" XmbLookupString gives 1 bytes: (70) "p" XFilterEvent returns: False KeyRelease event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3586557, (9,-14), root:(692,9), state 0x40, keycode 33 (keysym 0x70, p), same_screen YES, XLookupString gives 1 bytes: (70) "p" XFilterEvent returns: False KeyRelease event, serial 40, synthetic NO, window 0x4600001, root 0x9d, subw 0x0, time 3586560, (9,-14), root:(692,9), state 0x40, keycode 133 (keysym 0xffeb, Super_L), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False

    Read the article

  • Specs, Form and Function – What am I Missing?

    - by Barry Shulam
    Friday, October 26th, the Microsoft Surface RT arrived at the office. I was summoned to my boss's office for the grand unpacking. If I had planned ahead I could have used my iPhone 4 to film the event and post it on YouTube; however, the desire to hold the device and turn it ON was more inviting than becoming a proxy reviewer for Engadget's website.

    1980 was the first time we had a personal computer in our house. It was a Kaypro computer. It weighed 29 pounds, more than any person's lap could hold. Back then the term "portable computer" meant you could remove it from the building and take it elsewhere. Today I am typing this entry on a MacBook Air which weighs 2.38 pounds. This morning Amazon's front-page main title is: "Much More for Much Less". I was born at the right time to start with the CP/M operating system on the Kaypro and work through the DOS, Windows, Linux, Mac OS X and mobile phone operating systems and languages. If you are not aware, technology is moving at a rapid pace. The new iPad (for those who are keeping score – iPad 4) is replacing a 7-month-old machine, the new iPad (iPad 3). I have used and owned many technology devices in my life. The main point that most readers who are in the USA overlook is the fact that we are in the USA. The devices we purchase have a great digital garden to support them. The Kaypro computer had a 7-inch screen. It was a TV tube with two colors – black and green. You could see the 80-column screen flicker with characters – have you ever played Pac-Man emulated on the screen with the ABC characters? Traveling across the world you will find that not all apps on your device will function as they did back home, because they are not offered outside of your country of origin.

    I think the main question a buyer of technology should be asking is Function. The greatest specs without function limit you. The most beautiful form without function is the same as a crystal vase on your shelf – not a good cereal bowl in the morning. Microsoft Surface RT, Amazon Kindle Fire and Apple iPad are all great devices in their respective customers' hands. My advice for those looking to purchase this year: if the device is your only technology device, buy what you WANT and LIKE. Consider this parallel universe if it's not your only device. Ever go shopping for clothing, shoes, and accessories with your wife, girlfriend, sister or mother? If you listen carefully you will hear the little voices coming out of their heads saying: "This goes well with that and I can use it also with that outfit", "Do you think this clashes with that?", "Ohh I love how that combination looks on you". Portable devices such as tablets and computers can offer a whole lot more when they are combined with the digital ecosystem you have at home and the one the manufacturer offers online.

    Pros of each device: Microsoft Surface RT: There is a new piece of functionality named SmartGlass which will let you share the content off your tablet to your Xbox 360.
    Microsoft Office is loaded on the tablet. You can have more than one user profile on the tablet if you share it with others.

    Amazon Kindle or Kindle HD: If you are an Amazon consumer with an annual Amazon Prime service you can consume videos and read books off the Amazon site. It's the cheapest device. It's a step up from the Kindle reader in many ways.

    Apple iPad or iPad mini: Over 270 thousand applications. AirPlay gives you the ability to share to your TV screen. If you are a cord cutter (a person who gets their entertainment content over the web or air vs. cable providers), AirPlay or SmartGlass is a huge bonus.

    iPad mini or not: The mini will fit in a purse where the larger one will not. It's lighter, which makes it nice to hold for prolonged periods. It has an option for LTE wireless, which none of the other sub-9-inch tablets offer. The screen is non-Retina, which means the applications are smaller. Speaking with individuals above 50 who wear glasses, the Retina display does not make a difference for them; however, they prefer the larger iPad over the new mini.

    Happy shopping this Channuka season. The Kosher Coder. Follow me on Twitter @KosherCoder

    Read the article

  • Day 1 - Finding Like Minds

    - by dapostolov
    So, is being a Game Developer any different from being an IT Developer? I picture a poorly lit environment where I get to purchase my own desk lamp; I'm thinking one of those huge lava lamps that pump out so much heat you could fry an egg on it. To my right: a "great wall" of empty coke cans dwarfs me. Eating my last slice of pizza I look across my desk to see a fellow developer with a smug look on his face; he's just coded his latest module for the game and it looks like he's in nirvana. My duty, of course, is to remind him to keep focused on the job at hand. So, picking up my trusty elastic and aerodynamically crafted paper bullet I begin a 10 minute war of welts and laughter which is promptly interrupted by our Project Manager demanding more details from our morning Scrum meeting. After providing about 5 minutes of geek speak and several words of comfort to make his eyes glaze over...it hits me, the idea for the module...beckoning my developer friend over, we quickly shoo the Project Manager away and begin our brainstorming frenzy ... now, where'd I put that full can of coke?

    OK. OK. This isn't probably the most ideal game developer environment, but it definitely sounds fun to me...and from what I gather is nothing like most game development companies. But I'm not doing this blog series to "go pro"; like I stated in my first post, I want to make a 2D game from an idea my best friend and I drummed up long, long ago. I'm in this for the passion AND I want to see how easy it is for us .Net Developers to create a game. So where do I start? Where can I find like minded individuals? What technologies are there? What do I need to make a video game? The questions are endless....AND...since I already have an idea ... let's start with ... Technology (yes, I'm a geek, live with it...)

    Technology OK. Predominantly, games are still made in C++ or even C. I'm not sure how much assembly code is floating around lately; however, that is not my concern. I do know C / C++ from my past, enough to even get me by, but I'm mainly interested in a recent, not-so-new, technology called XNA. What is XNA? XNA allows us .Net Developers to make 2D / 3D games for Windows, Xbox*, and Windows Mobile 7*. * = for a nominal fee *cough* The following link is your one stop shop to XNA game development: http://creators.xna.com/en-US/education/gettingstarted The above site hosts information such as: - getting started - a sample/instructional shooter game in 2D / 3D with code (if I'm taking too long for you in this blog series) - downloads - starter kits... http://creators.xna.com/en-US/education/starterkits/ And of course...forums. You can also subscribe and pay for their premium membership which gets you some pretty awesome tutorials, resources, downloads, and premium community support. Some general Wiki information about XNA: http://en.wikipedia.org/wiki/XNA_%28Microsoft%29

    Community Support OK. Let's move on to industry and community support. Apart from XNA, there are some really cool sites out there, I just haven't found all of them yet. However, I found a really cool Game Development website called Gamasutra. You can click on the following link to get you there: http://www.gamasutra.com/ The site is 100% dedicated to "The Art & Business of Making Games". Armed with blogs, Twitter, jobs/resumes and, most importantly, industry news, one could subscribe to the feed and get lost in the wealth of information it provides.
    On a side note: I remember Gamasutra being around when my best friend and I wanted to make a video game...meaning, they've been around for a while now. I think the most beneficial aspect of this site is to understand the industry you want to get into. Otherwise, it's just a cool site to keep up to date with the industry in general.

    Another Community Support option is LinkedIn. Amongst the land of extremely bloated achievements and responsibilities lie 3 groups (that I have found) that deal with game development:
    http://www.linkedin.com/groups?gid=59205 - Game Developers
    http://www.linkedin.com/groups?gid=824817 - DirectX Game Developer Network
    http://www.linkedin.com/groups?gid=756587 - DirectX Developers
    The Game Developers group in LinkedIn is by far the most active of the three and could possibly provide a wealth of support.

    What I've done thus far:
    - I lightly researched the XNA technology
    - I looked around for some community sites to assist me
    - I downloaded XNA Game Studio 3.1 on my PC and installed it into my IDE
    - I even tried both tutorials! http://creators.xna.com/en-US/education/gettingstarted/bgintro/chapter1

    Best Regards D.
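    P.S. If you are curious what the starting point actually looks like before diving into the tutorials above, here is a minimal sketch based on the standard XNA Game Studio project template (the class name Game1 and the namespace are just the template defaults, not anything specific to my game); all it does is clear the screen every frame:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    namespace MyFirstGame
    {
        // Minimal XNA game loop: the framework calls Update and Draw for us.
        public class Game1 : Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                // SpriteBatch is what you use to draw 2D sprites and text.
                spriteBatch = new SpriteBatch(GraphicsDevice);
            }

            protected override void Update(GameTime gameTime)
            {
                // Game logic (input, physics, AI) goes here.
                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                // spriteBatch.Begin(); ...draw calls... spriteBatch.End();
                base.Draw(gameTime);
            }
        }
    }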

    Read the article

  • Sending Outlook Invites

    - by Daniel Moth
    Sending an Outlook invite for a meeting (also referred to as S+ in Microsoft) is a simple thing to get right if you just run the quick mental check below, which is driven by visual cues in the Outlook UI. I know that some folks don't do this often or are new to Outlook, so if you know one of those folks share this blog post with them, and if they read nothing else ask them to read step 7.

    1. Add on the To line the folks that you want to be at the meeting.

    2. Indicate optional invitees. Click on the "To" button to bring up the dialog that lets you move folks to be Optional (you can also do this from the Scheduling Assistant).

    3. Set the Reminder according to the attendee that has to travel the most. 5 minutes is the minimum.

    4. Use the Response Options and uncheck the "Request Response" if your event is going ahead regardless of who can make it or not, i.e. if everyone is optional. Don't force every recipient to make an extra click; instead make the extra click yourself - you are the organizer.

    5. Add a good subject. Make the subject such that just by reading it folks know what the meeting is about. Examples: "Review…", "Finalize…", "XYZ sync up". If this is only between two people and what is commonly referred to as a one to one, the subject would be something like "MyName/YourName 1:1". Write the subject in such a way that when the recipient sees this on their calendar among all the other items, they know what this meeting is about without having to see location, recipients, or any other information about the invite.

    6. Add a location, typically a meeting room. If recipients are from different buildings, schedule it where the folks that are doing the other folks a favor live. Otherwise schedule it wherever the least amount of people will have to travel. If you send me an invite to come to your building, and there are more of us than you, you are silently sending me the message that you are doing me a favor, so if you don't want to do that, include a note of why this is in your building, e.g. "Sorry, we are slammed with back to back meetings today so hope you can come over to our building". If this is in someone's office, the location would be something like "Moth's office (7/666)", where in parentheses you see the office location. If some folks are remote in another building/country, or if you know you picked a time which wasn't free for everyone, add an Online option (click the Lync Meeting button).

    7. Add a date and time. This MUST be at a time that is showing on the recipients' calendar as FREE or at worst TENTATIVE. You can check that on the Scheduling Assistant. The reality is that this is not always possible, so in that case you MUST say something about it in the invite body, e.g. "Sorry, I can see X has a conflict, but I cannot find a better slot", or "With so many of us there are some conflicts and I cannot find a better slot so hope this works", or "Apologies, but due to Y we must have this meeting at this time and I know there are some conflicts, hope you can make it anyway". When you do that, I had better not be able to find a better slot myself for all of us, and of course when you do that you have implicitly designated the Busy folks as optional.

    8. Finally, the body of the invite. This has the agenda of the meeting and, if applicable, the courtesy apologies due to messing up steps 6 & 7. This should not be the introduction to the meeting; in other words, the recipients should not be surprised when they see the invite and go to the body to read it. Notifying them of the meeting takes place via a separate email where you explain the purpose and give them a heads up that you'll be sending an invite. That separate email is also your chance to attach documents; don't do that as part of the invite.

    TIP: If you have sent mail about the meeting, you can then go to your sent folder to select the message and click the "Meeting" button (Ctrl+Alt+R). This will populate the body with the necessary background, auto-select the mandatory and optional attendees as per the TO/CC line, and have a subject that may be good enough already (or you can tweak it).

    Long to write, but very quick to remember and enforce, since most of it is common sense and the checklist is driven off the visual cues in the UI you use to send the invite. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Telerik Releases a new Visual Entity Designer

    Love LINQ to SQL but are concerned that it is a second class citizen? Need to connect to more databases than just SQL Server? Think that the Entity Framework is too complex? Want a domain model designer for data access that is easy, yet powerful? Then the Telerik Visual Entity Designer is for you. Built on top of Telerik OpenAccess ORM, a very mature and robust product, Telerik's Visual Entity Designer is a new way to build your domain model that is very powerful and also really easy to use. How easy? I'll show you here.

    First Look: Using the Telerik Visual Entity Designer To get started, you need to install the Telerik OpenAccess ORM Q1 release for Visual Studio 2008 or 2010. You don't need to use any of the Telerik OpenAccess wizards, designers, or using statements. Just right-click on your project and select Add|New Item from the context menu. Choose Telerik OpenAccess Domain Model from the Visual Studio project templates. (Note to existing OpenAccess users: don't run the Enable ORM wizard or any other OpenAccess menu unless you are building OpenAccess Entities.) You will then have to specify the database backend (SQL Server, SQL Azure, Oracle, MySQL, etc.) and connection. After you establish your connection, select the database objects you want to add to your domain model. You can also name your model; by default it will be NameofyourdatabaseEntityDiagrams. You can click finish here if you are comfortable, or tweak some advanced settings.

    Many users of domain models like to add prefixes and suffixes to classes, fields, and properties as well as handle pluralization. I personally accept the defaults; however, I hate how DBAs force underscores on me, so I click on the option to remove them. You can also tweak your namespace, mapping options, and define your own code generation template to gain further control over the outputted code. This is a very powerful feature, but for now, I will just accept the defaults. When we click finish, you can see your domain model as a file with the .rlinq extension in the Solution Explorer. You can also bring up the visual designer to view or further tweak your model by double-clicking on the model in the Solution Explorer. Time to use the model!

    Writing a LINQ Query Programming against the domain model is very simple using LINQ. Just set a reference to the model (line 12 of the code below) and write a standard LINQ statement (lines 14-16). (OpenAccess users: notice that you don't need any using statements for OpenAccess or an IObjectScope, just raw LINQ against your model.)
     1: using System;
     2: using System.Linq;
     3: //no need for an OpenAccess using statement
     4:
     5: namespace ConsoleApplication3
     6: {
     7:     class Program
     8:     {
     9:         static void Main(string[] args)
    10:         {
    11:             //a reference to the data context
    12:             NorthwindEntityDiagrams dat = new NorthwindEntityDiagrams();
    13:             //LINQ Statement
    14:             var result = from c in dat.Customers
    15:                          where c.Country == "Germany"
    16:                          select c;
    17:
    18:             //Print out the company name
    19:             foreach (var cust in result)
    20:             {
    21:                 Console.WriteLine("CompanyName: " + cust.CompanyName);
    22:             }
    23:             //keep the console window open
    24:             Console.Read();
    25:         }
    26:     }
    27: }

    Lines 19-24 loop through the result of our LINQ query and display the results. That's it! All of the super powerful features of OpenAccess are available to you to further enhance your experience; however, in most cases this is all you need. In future posts I will show how to use the Visual Designer with some other scenarios. Stay tuned. Enjoy!
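    As a quick addendum, here is a variation on the query above (using the same generated NorthwindEntityDiagrams context and Customers entity from the sample); you can sort and project in the same statement, and there is still nothing OpenAccess-specific in the code:

    // Same context as above: sort the German customers by name and
    // project just the single column we want to print.
    var names = from c in dat.Customers
                where c.Country == "Germany"
                orderby c.CompanyName
                select c.CompanyName;

    foreach (var name in names)
    {
        Console.WriteLine("CompanyName: " + name);
    }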

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. As featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimball Group. Have a look at the book samples, especially the Sample package for custom SCD handling.

    All input columns are passed through the transformation unaltered; those selected are used to generate the checksum, which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns will always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer. The Checksum Transformation uses an algorithm based on the .NET Framework GetHashCode method; it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types: DT_NTEXT, DT_IMAGE and DT_BYTES.

    ChecksumAlgorithm Property The ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0). Original - Original checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release. FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option. CRC32 - Using a standard 32-bit cyclic redundancy check (CRC), this provides a more open implementation.

    The component is provided as an MSI file; however, to complete the installation, you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?; just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window.

    Downloads The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Checksum Transformation for SQL Server 2005 Checksum Transformation for SQL Server 2008 Checksum Transformation for SQL Server 2012

    Version History SQL Server 2012 Version 3.0.0.27 – SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010) SQL Server 2008 Version 2.0.0.27 – Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error Item has already been added. Key in dictionary: '79764919'. Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010) Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (29 Oct 2008) Version 2.0.0.6 - SQL Server 2008 pre-release. This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008) SQL Server 2005 Version 1.5.0.43 – Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error Item has already been added. Key in dictionary: '79764919'. (19 Oct 2010) Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008) Version 1.4.0.0 - Installer refresh only. (22 Dec 2007) Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006) Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed custom UI bug with Output column name not persisting. (10 Nov 2005) Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005) Version 1.0.0 - Public Release (Beta). (30 Oct 2004) Screenshot
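    To make the difference between the two default algorithms concrete, here is a small illustrative C# sketch (not the component's actual code): String.GetHashCode is a runtime implementation detail and, as noted above, is not guaranteed to produce the same value on x86 and x64, while a CRC-32 computed over the raw bytes is deterministic everywhere. The input string is just a made-up sample row.

    using System;
    using System.Text;

    static class ChecksumDemo
    {
        // Standard CRC-32 (polynomial 0xEDB88320); the result depends only on the input bytes.
        static uint Crc32(byte[] data)
        {
            uint crc = 0xFFFFFFFF;
            foreach (byte b in data)
            {
                crc ^= b;
                for (int i = 0; i < 8; i++)
                    crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320 : crc >> 1;
            }
            return ~crc;
        }

        static void Main()
        {
            string row = "Customer|42|Germany";

            // Runtime-dependent: may differ between x86 and x64 processes.
            Console.WriteLine("GetHashCode: " + row.GetHashCode());

            // Stable: same value on any platform and any run.
            Console.WriteLine("CRC-32:      " + Crc32(Encoding.UTF8.GetBytes(row)));
        }
    }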

    Read the article

  • [XSL-FO] Characters from languages other than English

    - by Lukasz Kurylo
    My client has departments in Central and Eastern Europe, so there is a high possibility that the generated PDFs will contain, at least in people's names and/or surnames, some characters specific to the country's language.

    With XSL-FO we can use some out-of-the-box fonts; e.g. the default is Times. We can change it, for a specific block of text or for the entire document, to another one like Helvetica or Arial. All is fine up to the moment when we use only the English alphabet. If we want to add e.g. some characters from the Polish or Bulgarian language, in the *.fo file:

    <fo:block>
      <fo:inline font-weight="bold">english: </fo:inline>
      <fo:inline font-weight="bold">yellow</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">polish: </fo:inline>
      <fo:inline font-weight="bold">żółty</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">russian: </fo:inline>
      <fo:inline font-weight="bold">жёлтый</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">bulgarian: </fo:inline>
      <fo:inline font-weight="bold">жълт</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">english: </fo:inline>
      <fo:inline font-weight="bold">yellow</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">polish: </fo:inline>
      <fo:inline font-weight="bold" font-family="Arial">żółty</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">russian: </fo:inline>
      <fo:inline font-weight="bold" font-family="Arial">жёлтый</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">bulgarian: </fo:inline>
      <fo:inline font-weight="bold" font-family="Arial">жълт</fo:inline>
    </fo:block>

    The result can be different from the expected one, depending on the selected font, e.g.: As you can see, neither Times nor Arial works in this case.

    The problem here is not related to XSL-FO, but rather to the renderer we are using. I lost a lot of time finding a way to get these characters into my generated files with the XSL-FO -> PDF renderer I am using. Fortunately, all that has to be done is to embed the font (or part of it) in the file(s) during rendering.

    The renderer that I'm using is the open source FO.NET. For this one, the code to generate a PDF file looks like this:

    var fonet = Fonet.FonetDriver.Make();
    fonet.Render("source.fo", "result.pdf");

    To embed the font in the PDF, we need to set the appropriate option on the driver:

    fonet.Options = new Fonet.Render.Pdf.PdfRendererOptions()
    {
        FontType = Fonet.Render.Pdf.FontType.Embed
    };

    Right now, the PDF we get should look like this: As you can see, the result for the Arial font looks exactly how it should, because this font has characters included not only for the English language, unlike the default Times, which we should avoid if we are not generating English-only documents.

    It is worth noticing that in this situation the generated PDF file is quite large; it is more than 400 kb in size. This is of course because the entire font is embedded in it, to make the document portable to systems where the used font is not present.
    Instead of embedding the entire font, we can embed only the subset of characters used, by changing the options to:

    fonet.Options = new Fonet.Render.Pdf.PdfRendererOptions()
    {
        FontType = Fonet.Render.Pdf.FontType.Subset
    };

    Right now, this specific PDF is only 12 kb in size.
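    Putting the fragments together, a complete console sketch (using only the FO.NET calls shown above; the file names are placeholders) looks like this:

    using Fonet;
    using Fonet.Render.Pdf;

    class Program
    {
        static void Main()
        {
            // Create the FO.NET driver as shown above.
            FonetDriver fonet = FonetDriver.Make();

            // Embed only the subset of glyphs actually used, so the Polish,
            // Russian and Bulgarian characters render while the PDF stays small.
            fonet.Options = new PdfRendererOptions
            {
                FontType = FontType.Subset
            };

            // Placeholder file names; source.fo is the document built above.
            fonet.Render("source.fo", "result.pdf");
        }
    }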

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What's the most safe and efficient way to get the job done? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them: I have old CDs/DVDs which have some backups, these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous, pieces could fly fast and may cause harm. Scratching them badly is what I always do but it takes long time and I managed to read some of the data in the scratched CDs/DVDs. What's the way to physically destroy a CD/DVD safely? How should he approach the problem?

    The Answer SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution: The proper way is to get yourself a shredder that also handles CDs – look online for CD shredders. This is the right option if you end up doing this routinely. I don't do this very often – for small scale destruction I favour a pair of tin snips – they have enough force to cut through a CD, yet are blunt enough to cause small cracks along the shear line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing along the plastic, and these work magnificently. Do it in a bag, cause this generates sparkly bits. There's also the fun, and probably dangerous way – find yourself an old microwave, and microwave them. I would suggest doing this in a well ventilated area of course, and not using your mother's good microwave. There's a lot of videos of this on YouTube – such as this (who's done this in a kitchen… and using his mom's microwave). This results in a very much destroyed CD in every respect. If I was an evil hacker mastermind, this is what I'd do. The other options are better for the rest of us.

    Another contributor, Keltari, notes that the only safe (and DoD approved) way to dispose of data is total destruction: The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase "Good enough for government work" does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed — minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically, that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it's no different than using sandpaper on the writable side, till the data is gone.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not free – are also an option. All of these formats are useful for one need or another. As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don't get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.)

    As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I'm certain that the minute I discard the book, I'm going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically, as the books have a way of taking over significant square footage in our house!

    Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn't fit neatly into a book. As many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn't available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests.

    With the trend towards electronic formats for books, I imagine that we'll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I've provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip.
Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109 but I have no idea how long that will remain valid. My latest books (#9 and #10 which are milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the downloads for the book will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point. Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
    If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding Red Gate's decision to no longer offer a free version. Since then, the race has begun for a replacement from a provider that would stand by their promises to the community. Mono has an ongoing free alternative, which has been available for a long time. However, other vendors are stepping up to the plate with their own offerings.

    If Not Reflector, Then What? One of these vendors is Telerik. In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector. Not only does Telerik offer a usable replacement, but they've (in my opinion) produced a product that integrates more naturally with Visual Studio than any other product ever has. Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative). If you want to follow along with this demo, you'll need to have Telerik JustCode installed. If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site.

    A Tall Tale; Prove It! With JustCode, you can view code in the .NET Framework or any other 3rd party library (that isn't well obfuscated). This demo depends on LINQ to Twitter, which you can download from CodePlex.com and create a reference to, or install the package online as described in my previous post on NuGet. Regardless of the method, you'll have a project with a reference to LINQ to Twitter. Use a Console Project if you want to follow along with this demo. Note: If you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4. The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter. You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting.

    Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter: var l2tCtx = new TwitterContext(); If you're following along, add the code above to the Main method, which will look similar to this:

    using LinqToTwitter;

    namespace NuGetInstall
    {
        class Program
        {
            static void Main(string[] args)
            {
                var l2tCtx = new TwitterContext();
            }
        }
    }

    The code above doesn't really do anything, but it does give me something that I can show to demonstrate how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key. As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class. Here's the result: Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code. The scenario with TwitterContext happens because you don't have the source code to the file. Visual Studio has always done this, and you can experiment by selecting any .NET type, i.e. a string type, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works, and you'll see how this can make your job easier.

    In the previous figure, you only saw prototypes associated with the code, i.e. notice that the default constructor is empty. Again, this is normal because Visual Studio doesn't have the ability to decompile code.
    However, that's the purpose of this post: showing you how JustCode fills that gap. To decompile code, right-click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu. The shortcut keys are Ctrl+1. After a brief pause, accompanied by a progress window, you'll see the metadata expand into full decompiled code. Notice below how the default constructor now has code, as opposed to the empty member prototype in the original metadata:

    And Why is This So Different? Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works. The navigate-to functionality already exists, and you can use that, along with a simple context menu option (or shortcut key), to transform prototypes into decompiled code. Telerik is filling the Reflector/Red Gate gap by providing a supported alternative for decompiling code. Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation. Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik. Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter. Joe

    Read the article

  • How to handle updated configuration when it's already been cloned for editing

    - by alexrussell
    Really sorry about the title, which probably doesn't make much sense. Hopefully I can explain myself better here, as it's something that's kinda bugged me for ages, and is now becoming a pressing concern as I write a bit of software with configuration. Most software comes with default configuration options stored in the app itself, and then there's a configuration file (let's say) that a user can edit. Once created/edited for the first time, subsequent updates to the application cannot (easily) modify this configuration file for fear of clobbering the user's own changes to the default configuration. So my question is, if my application adds a new configurable parameter, what's the best way to aid discoverability of the setting and allow the user (developer) to override it as nicely as possible, given the following constraints:

    I actually don't have a canonical default config in the application per se, it's more of a 'cascading filesystem'-like affair - the config template is stored in default/config.json and when the user wishes to edit the configuration, it's copied to user/config.json. If a user config is found it is used - there is no automatic overriding of a subset of keys, the whole new file is used and that's that. If there's no user config the default config is used. When a user wishes to edit the config they run a command to 'generate' it for them (which simply copies the config.json file from the default to the user directory). There is no UI for the configuration options as it's not appropriate to the userbase (think of my software as a library or something, the users are developers, the config is done in the user/config.json file). Due to my software being library-like there's no simple way to, on updating of the software, run some tasks automatically (so any idea of looking at the current config, comparing it to the template config, and adding missing keys isn't appropriate).

    The only solution I can think of right now is to say "there's a new config setting X" in release notes, but this doesn't seem ideal to me. If you want any more information let me know. The above specifics are not actually 100% true to my situation, but they represent the problem equally well with lower complexity. If you do want specifics, however, I can explain the exact setup.

    Further clarification of the type of configuration I mean: think of the Atom code editor. There appears to be a default 'template' config file somewhere, but as soon as a configuration option is edited, ~/.atom/config.cson is generated and the setting goes in there. From now on, if Atom is updated and gets a new configuration key, this file cannot be overwritten by Atom without a lot of effort to ensure that the addition/modification of the key does not clobber. In Atom's case, because there is a GUI for editing settings, they can get away with just adding the new setting to the UI to aid 'discoverability'. I don't have that luxury.

    Clarification of my constraints and what I'm actually looking for: The software I'm writing is actually a package for a larger system. This larger system is what provides the configuration, and the way it works is kinda fixed - I just do a config('some.key') kinda call and it knows to look to see if the user has a config clone and if so use it, otherwise use the default config which is part of my package.
Now, while I could make my application edit the user's configuration files (there is a convention about where they're stored), it's generally not done, so I'd like to live with the constraints of the system I'm using if possible. And it's not just about discoverability either, one large concern is that the addition of a configuration key won't actually work as soon as the user has their own copy of the original template. Adding the key to the template won't make a difference as that file is never read. As such, I think this is actually quite a big flaw in the design of the configuration cascading system and thus needs to be taken up with my upstream. So, thinking about it, based on my constraints, I don't think there's going to be a good solution save for either editing the user's configuration or using a new config file every time there are updates to the default configuration. Even the release notes idea from above isn't doable as, if the user does not follow the advice, suddenly I have a config key with no value (user-defined or default). So the new question is this: what is the general way to solve the problem of having a default configuration in template config files and allowing a user to make user-specific version of these in order to override the defaults? A per-key cascade (rather than per-file cascade) where the user only specifies their overrides? In this case, what happens if a configuration value is an array - do we replace or append to the default (or, more realistically, how does the user specify whether they wish to replace or append to)? It seems like configuration is kinda hard, so how is it solved in the wild?
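    To make the per-key cascade idea concrete, here is a small illustrative sketch (in C#, but not tied to any particular framework; all names and keys are made up): the defaults ship with the application and can gain new keys on upgrade, the user file holds only overrides, and lookups fall back from user to default, so a newly added key still resolves even if the user's overrides file predates it.

    using System;
    using System.Collections.Generic;

    class CascadingConfig
    {
        private readonly Dictionary<string, string> _defaults;
        private readonly Dictionary<string, string> _userOverrides;

        public CascadingConfig(Dictionary<string, string> defaults,
                               Dictionary<string, string> userOverrides)
        {
            _defaults = defaults;
            _userOverrides = userOverrides;
        }

        // The user's value wins if present; otherwise fall back to the shipped default.
        public string Get(string key)
        {
            if (_userOverrides.TryGetValue(key, out var value))
                return value;
            if (_defaults.TryGetValue(key, out value))
                return value;
            throw new KeyNotFoundException("Unknown config key: " + key);
        }
    }

    class Demo
    {
        static void Main()
        {
            // Defaults ship with the application and can gain new keys on upgrade.
            var defaults = new Dictionary<string, string>
            {
                ["cache.enabled"] = "true",
                ["cache.ttl"] = "300",   // hypothetical key added in a newer release
            };

            // The user's file only lists what they changed.
            var user = new Dictionary<string, string>
            {
                ["cache.enabled"] = "false",
            };

            var config = new CascadingConfig(defaults, user);
            Console.WriteLine(config.Get("cache.enabled")); // "false" (user override)
            Console.WriteLine(config.Get("cache.ttl"));     // "300"   (new default still works)
        }
    }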

    Read the article

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I’ve found a bit annoying. Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I’m not responsible. No warranties or guarantees of any sort. This info is provided “as is”. By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it’s a good idea to default to 9_1. To do this you could add in new HLSL files via “Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)” and then edit the shader files’ properties to set them manually to use 9_1 via “Properties->HLSL Compiler->General->Shader Model”. This is fine unless you forget to do this once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you’d wind up with either rejection or angry “this doesn’t work on my computer! ripoff!” messages. There’s another option though. In “Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader” (note the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a “VertexShader.vstemplate” file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.: <CustomParameter Name="$ShaderType$" Value="Vertex"/> On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this: <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/> such that we now have:     <CustomParameters>       <CustomParameter Name="$ShaderType$" Value="Vertex"/>       <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>     </CustomParameters> You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the “PixelShader” directory. Edit the “PixelShader.vstemplate” file and make the same change (note that this time $ShaderType$ is “Pixel” not “Vertex”; you shouldn’t be changing that line anyway, but if you were to just copy and replace the above four lines then you will wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sorts of weird errors as a result). Once you’ve added the $ShaderModel$ line to “PixelShader.vstemplate” and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don’t support any of the other shader types, those are set to default to their appropriate minimums already (Compute and Geometry are set to “4.0” and Domain and Hull are set to “5.0”, which are their respective minimums), though not all 4.0 cards support Compute shaders; they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware. 
In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx ) such that Japanese is 1041 (for example), though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files in the drive to which you installed VS 2012 (D:\ in my case) but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml. You will almost definitely break things by doing that; it’s just something you can look through to see all the other options that the FXC task takes such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though so I don’t have any advice on how to set them.

    Read the article

  • Airline mess - what a journey

    - by Mike Dietrich
    What a day, what a journey ... Flew at noon from Munich to Zuerich to catch my onward flight to San Francisco with Swiss. And the day did start very well, with Lufthansa delaying the connecting flight by 42 minutes for a 35 minute flight. And as I was obviously the only passenger connecting to San Francisco, nobody picked me up at the airplane to bring me directly to my connection, as Swiss did for the 8 passengers connecting to Miami. So I missed my flight. What a start - and many thanks to Lufthansa. I was not the only one missing a connection, as Lufthansa/Swiss had canceled the flight before due to "technical problems". In Zuerich, Swiss rebooked me via Frankfurt with Lufthansa to board a United Airlines flight to San Francisco. "Ouch" I thought. I'd had my share of experience with United already, as they'd messed up my luggage on the way to San Francisco some years ago and it took them five (!!!) days to fly my bag over and deliver it. But actually it was the only option today. So I said "Yes". A big mistake, as I learned later on. The Frankfurt flight was delayed as well, "due to a late incoming aircraft". But there was plenty of time. And I went to the Swiss counter at the gate and had them check whether my baggage was on that flight to Frankfurt. They said "Yes". Boarding the plane with a delay of 45 minutes (the typical Lufthansa delay these days) I spotted my Rimowa trolley right next to the plane on the airfield. So I was sure that it would be sent to Frankfurt. In Frankfurt I went to the United counter once it opened - I had to go through the passport check they do for US flights as well - and they said "Yes, your luggage is with us". Well ... Arriving in San Francisco with just a few minutes' delay and after a very fast immigration procedure, I saw the first bags with Priority tags getting pushed to the baggage claim - but mine was not there. I waited ... and waited ... and waited. Well, thanks United, you did it again!!! I have flown United Airlines twice in the past years - and in both cases they messed up my luggage on the way to San Francisco. How lovely is that ... Now the real fun started again as the lady at the "Lost and Found" counter for luggage spotted my luggage in her system in Zuerich - and told me it was supposed to be sent with LH1191 to Frankfurt on Sept 27. But this was yesterday in Europe - it's already Sept 28 - and I saw my luggage in front of the airplane. So I'd suppose it's in Frankfurt already. But what could she do? Nothing but do the awful paperwork. And "No Mr Dietrich, we don't call international numbers". Thank you, United. Next time I'll try to get a contract for a US land line in advance. They can't even tell you which plane will bring your luggage. It may be tomorrow, with a UA flight arriving around 4pm in SFO. I'm looking forward to some hours in the wonderful United Airlines call center waiting line. Last time I spent 60-90 minutes every day until I got my luggage. If it takes that long again, OOW will be over by then. I love airline travel - and especially with United Airlines. And by the way ... they gave us these nice fancy packages during the flight:  That looks good - what's in that box??? Yes, really ... a bag of potato chips. Pure fat - very healthy.  I doubt that I'll ever fly United Airlines again!!!

    Read the article

  • EM CLI, diving in and beyond!

    - by Maureen Byrne
    Doing more in less time… Isn’t that what we all strive to do? With this in mind, I put together two screen watches on the Oracle Enterprise Manager 12c command line interface, or EM CLI as it is also known. There is a wealth of information on any topic that you choose to read about, from manual pages to coding documents…might I even say blog posts? In our busy lives it is so nice to just sit back with a short video, watch, and learn enough to dive in. Doing more in less time is the essence of EM CLI. It enables you to script fundamental and complex administrative tasks in an elegant way, thanks to the Jython scripting language. Repetitive tasks can be scripted and reused again and again. Sure, a Graphical User Interface provides a more intuitive step-by-step approach to tasks, it provides a way of quickly becoming familiar with a product and its many features, and it is definitely the way to go when viewing performance data and historical trending…but for repetitive and complex tasks, scripting is the way to go! Let us take the everyday task of creating an administrator. Using EM CLI in interactive mode the command could look like this: emcli>create_user(name='jan.doe', type='EXTERNAL_USER') This command creates an administrator called jan.doe which is an externally authenticated user, possibly LDAP or SSO, defined by the EXTERNAL_USER tag. The create_user procedure takes many arguments; see the documentation for more information. Now, where EM CLI really shines and shows its power is in creating multiple users. Regardless of the number, tens or thousands, the effort is the same. With the use of a standard programming construct, a loop, you can place your create_user() procedure within it. Using a loop allows you to iterate through a previously created list, creating new users until the list is complete. Using EM CLI in script mode, your Jython loop would look something like this… for user in list_of_users:       create_user(name=user, expire=’true’, password=’welcome123’) This Jython code snippet iterates through a previously defined list of names, list_of_users, taking each name (user, in this case) and creating an administrator; it sets the password to welcome123, but forces the user to reset it when they first log in. This is only one of over four hundred procedures created to expose Oracle Enterprise Manager 12c functionality in a powerful and programmatic way. It has been a few months since we released EM CLI with the scripting option. We are seeing many users adapt to this fun and powerful way of using Oracle Enterprise Manager 12c. What are the first steps? Watch these screen watches, and dive in. The first screen watch steps you through where and how to download and install EM CLI, and how to run your first few commands. The second screen watch steps you through a few scripts. Next time, I am going to show you the basic building blocks for writing a Jython script to perform Oracle Enterprise Manager 12c administrative tasks. Join this growing group of EM CLI users…. Dive in!
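For anyone who wants to see those pieces strung together, here is a rough script-mode sketch built around the create_user() call from the post. The admins.txt file name and the idea of reading the names from a file are assumptions added for illustration; the create_user() arguments are exactly the ones shown above.

# Rough EM CLI script-mode sketch: create one administrator per name listed in admins.txt
# (the file name is made up for this example).
f = open('admins.txt')
list_of_users = [line.strip() for line in f if line.strip()]
f.close()

for user in list_of_users:
    # Same call as in the post: welcome123 is a temporary password and
    # expire='true' forces the user to reset it at first login.
    create_user(name=user, expire='true', password='welcome123')

print('Created %d administrators' % len(list_of_users))

Run it with the EM CLI client in script mode (for example, something like emcli @create_admins.py) so that the create_user verb is available as a Jython function.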

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerisation of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective which is typically on the road and when they are still with the customer. Oracle has invested many years of research in understanding customer's Mobile requirements. “The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful,” said Arin Bhowmick, Director, CRM, for the Applications UX team. “We did that by talking to and analyzing the needs of a lot of people in different roles.” The team studied real-life sales teams. “We wanted to study salespeople in context with their work,” Bhowmick said. “We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go.” Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps which was featured on the Oracle Applications Blog.  Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. 
Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today’s Marketplace The notion of Facebook isn’t new. We lived it pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view the sooner that they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, typically you have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila Vice President, Oracle Applications Development

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components - the source task and the destination task - are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and the CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers. Step 1: Open Visual Studio and create a new Integration Services Project. Step 2: Add a new Data Flow Task to the Control Flow window. Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component Type: pick the source type, as we will be outputting columns from our stored procedure. Step 4: Double click the Script Component to open the editor. Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property from the property editor. In our example, we'll add the column JobID of type String. Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it. Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce Component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures along with their inputs and outputs in the product help. //Configure the connection string to your credentials String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;"; using (SalesforceConnection conn = new SalesforceConnection(connectionString)) { //Create the command to call the stored procedure CreateJob SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact")); cmd.Parameters.Add(new SalesforceParameter("Action", "insert")); //Execute CreateJob //CreateBatch requires JobID as input so we store this value for later SalesforceDataReader rdr = cmd.ExecuteReader(); String JobID = ""; while (rdr.Read()) { JobID = (String)rdr["JobID"]; } //Create the command for CreateBatch, for this example we are adding two new rows SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn); batCmd.CommandType = CommandType.StoredProcedure; batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID)); batCmd.Parameters.Add(new SalesforceParameter("Aggregate", "<Contact><Row><FirstName>Bill</FirstName>" + "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>")); //Execute CreateBatch SalesforceDataReader batRdr = batCmd.ExecuteReader(); } Step 7b: If you had specified output columns earlier, you can now add data into them using the UserComponent Output0Buffer. For example, we had set an output column called JobID of type String, so now we can set a value for it. We will modify the DataReader that contains the output of CreateJob like so:
while (rdr.Read()) { Output0Buffer.AddRow(); JobID = (String)rdr["JobID"]; Output0Buffer.JobID = JobID; } Step 8: Note: You will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and include the following using statements at the top of the class: using System.Data; using System.Data.RSSBus.Salesforce; Step 9: Once you are done editing your script, save it, and close the window. Click OK in the Script Transformation window to go back to the main pane. Step 10: If you had any outputs from the Script Component you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file. Step 11: Your project should be ready to run.

    Read the article

  • A Visual Studio Release Grows in Brooklyn

    - by andrewbrust
    Yesterday, Microsoft held its flagship launch event for Office 2010 in Manhattan. Today, the Redmond software company is holding a local launch event for Visual Studio (VS) 2010, in Brooklyn. How come information workers get the 212 treatment and developers are relegated to 718? Well, here’s the thing: the Brooklyn Marriott is actually a great place for an event, but you need some intimate knowledge of New York City to know that. NBC’s Studio 8H, where the Office launch was held yesterday (and from where SNL is broadcast) is a pretty small venue, but you’d need some inside knowledge to recognize that. Likewise: while Office 2010 is a product whose value is apparent, appreciating VS 2010’s value takes a bit more savvy. Setting aside its year-based designation, this release of VS, counting the old Visual Basic releases, is the 10th version of the product. How can a developer audience get excited about an integrated development environment when it reaches double-digit version numbers? Well, it can be tough. Luckily, Microsoft sent Jay Schmelzer, a Group Program Manager from the Visual Studio team in Redmond, to come tell the Brooklyn audience why they should be excited. Turns out there are a lot of reasons. Support for SharePoint development is a big one. In previous versions of VS, that support had been anemic, at best. Shortage of SharePoint developers is a huge issue in the industry, and this should help. There’s also built-in support for Windows Azure (Microsoft’s cloud platform) and, through a download, support for the forthcoming Windows Phone 7 platform. ASP.NET MVC, a “close-to-the-metal” Web development option that does away with the Web Forms abstraction layer, has a first-class presence in VS. So too does jQuery, the Open Source environment that makes JavaScript development a breeze. The jQuery support is so good that Microsoft now contributes to that Open Source project and offers IntelliSense support for it in the code editor. Speaking of the VS code editor, it now supports multi-monitor setups, zoom-in, and block selection. If you’re not a developer, this may sound confusing and minute. I’ll just say that for people who are developers these are little things that really contribute to productivity, and that translates into lower development costs. The really cool demo, though, was around Visual Studio 2010’s new debugging features. This stuff is hard to showcase, but I believe it’s truly breakthrough technology: imagine being able to step backwards in time to see what might have caused a bug. Cool? Now imagine being able to do that, even if you weren’t the tester and weren’t present while the testing was being done. Then imagine being able to see a video screen capture of what the tester was doing with your app when the bug occurred. VS 2010 allows all that. This could be the demise of the IWOMM (“it works on my machine”) syndrome. After the keynote, I asked Schmelzer if any of Microsoft’s competitors have debugging tools that come close to VS 2010’s. His answer was an earnest “we don’t think so.” If that’s true, that’s a big deal, and a huge advantage for developer teams who adopt it. It will make software development much cheaper and more efficient. Kind of like holding a launch event at the Brooklyn Marriott instead of 30 Rock in Manhattan! VS 2010 (version 10) and Office 2010 (version 14) aren’t the only new product versions Microsoft is releasing right now.
There’s also SQL Server 2008 R2 (version 10.5), Exchange 2010 (version 8, I believe), SharePoint 2010 (version 4) and, of course, Windows 7. With so many new versions at such levels of maturity, I think it’s fair to say Microsoft has reached middle-age. How does a company stave off a potential mid-life crisis, especially with young Turks like Google coming along and competing so fiercely? Hard to say. But if focusing on core value, including value that’s hard to play into a sexy demo, is part of the answer, then Microsoft’s doing OK. And if some new tricks, like Windows Phone 7, can gain some traction, that might round things out nicely. Are the legacy products old tricks, or are they revised classics? I honestly don’t know, because it’s the market’s prerogative to pass that judgement. I can say this though: based on today’s show, I think Microsoft’s been doing its homework.

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Modifying the Default Shipped Template

    - by The Old Toxophilist
    Having installed your Exalogic Virtual environment, by default you have a single template which can be used to create your vServers. Although this template is suitable for creating simple test or development vServers, it is recommended that you look at creating your own custom vServers that match the environment you wish to build and deploy. Therefore this Tea Break Snippet will take you through the simple process of modifying the standard template. Before You Start To edit the template you will need the Oracle ModifyJeos Utility, which can be downloaded from the eDelivery Site. Once the ModifyJeos Utility has been downloaded we can install the rpms onto either an existing vServer or one of the Control vServers. rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm Alternatively you can install the modifyjeos packages on a non-Exalogic OEL installation or a VirtualBox image. If you are doing this, assuming OEL 5u8, you will need the following rpms. rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm rpm -ivh ovm-el5u2-xvm-jeos-1.0.1-5.el5.i386.rpm rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm Base Template If you have installed the modifyjeos packages onto a vServer running on the Exalogic, then simply mount /export/common/images from the ZFS storage and you will be able to find the el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz (or similar, depending on which version you have) template file. Alternatively the latest can be downloaded from the eDelivery Site. Now that we have the template tgz we will need to extract it as follows: tar -zxvf el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz This will create a directory called BASE which will contain the System.img (vServer image) and vm.cfg (vServer config information). This directory should be renamed to something more meaningful that indicates what we have done to the template, and then the simple name / name in the vm.cfg edited for the same reason. Modifying the Template Resizing Root File System By default the shipped template has a root size of 4 GB, which will leave a vServer created from it running at 90% full on the root disk. We can simply resize the template by executing the following: modifyjeos -f System.img -T <New Size MB> For example, to increase the default 4 GB to 40 GB we would execute: modifyjeos -f System.img -T 40960 Resizing Swap We can modify the size of the swap space within a template by executing the following: modifyjeos -f System.img -S <New Size MB> For example, to increase the swap from the default 512 MB to 4 GB we would execute: modifyjeos -f System.img -S 4096 Changing RPMs Adding RPMs To add RPMs using modifyjeos, complete the following steps: Add the names of the new RPMs in a list file, such as addrpms.lst. In this file, you should list each new RPM on a separate line. Ensure that all of the new RPMs are in a single directory, such as rpms. Run the following command to add the new RPMs: modifyjeos -f System.img -a <path_to_addrpms.lst> -m <path_to_rpms> -nogpg Where <path_to_addrpms.lst> is the path to the location of the addrpms.lst file, and <path_to_rpms> is the path to the directory that contains the RPMs. The -nogpg option eliminates the signature check on the RPMs. Removing RPMs To remove RPMs using modifyjeos, complete the following steps: Add the names of the RPMs (the ones you want to remove) in a list file, such as removerpms.lst. In this file, you should list each RPM on a separate line.
The Oracle Exalogic Elastic Cloud Administrator's Guide provides a list of all RPMs that must not be removed from the vServer. Run the following command to remove the RPMs: modifyjeos -f System.img -e <path_to_removerpms.lst> Where <path_to_removerpms.lst> is the path to the location of the removerpms.lst file. Mounting the System.img For all other modifications that are not supported by the modifyjeos command (adding your custom yum repositories, pre-configuring NTP, modifying the default NFSv4 Nobody functionality, etc.) we can mount the System.img and access it directly. To facilitate quick and easy mounting/unmounting of the System.img I have put together the simple scripts below. MountSystemImg.sh #!/bin/sh # The script assumes it's being run from the directory containing the System.img # Export for later i.e. during unmount export LOOP=`losetup -f` export SYSTEMIMG=/mnt/elsystem # Make Temp Mount Directory mkdir -p $SYSTEMIMG # Create Loop for the System Image losetup $LOOP System.img kpartx -a $LOOP mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG #Change Dir into mounted Image cd $SYSTEMIMG UnmountSystemImg.sh #!/bin/sh # The script assumes it's being run from the directory containing the System.img # Assume the $LOOP & $SYSTEMIMG exist from a previous run of MountSystemImg.sh umount $SYSTEMIMG kpartx -d $LOOP losetup -d $LOOP Packaging the Template Once you have finished modifying the template it can simply be repackaged and then imported into EMOC as described in "Exalogic 2.0.1 Tea Break Snippets - Importing Public Server Template". To do this we will simply cd to the directory above the one containing the modified files and execute the following: tar -zcvf <New Template Name>.tgz <New Template Directory> The resulting .tgz file can be copied to the images directory on the ZFS and uploaded using the IB network. This entry was originally posted on The Old Toxophilist Site.

    Read the article

  • Silverlight 4 Twitter Client - Part 2

    - by Max
    We will create a few classes now to help us with storing and retrieving user credentials, so that we don't ask for them every time we want to talk to Twitter to get some information. Now for the class that sorts out the credentials. We will make this class static so as to ensure there is only one instance of it. This class is mainly going to include a getter/setter for username and password, a method to check if the user is logged in, and another one to log out the user. You can get the code here. Now let us create another class to facilitate easy retrieval of values from the Twitter XML-format results for any queries we make. This basically involves just creating a getter/setter for all the values that you would like to retrieve from the XML document returned. You can get the format of the XML document from here. Here is what I have in my Status.cs data structure class. using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; namespace MaxTwitter.Classes { public class Status { public Status() {} public string ID { get; set; } public string Text { get; set; } public string Source { get; set; } public string UserID { get; set; } public string UserName { get; set; } } } Now let us look into implementing Login.xaml.cs. The first thing here is that if the user is already logged in, we need to redirect the user to the home page. This we can accomplish using the OnNavigatedTo event, which is fired when the user navigates to this particular Login page. Here you utilize the Navigate method of NavigationService to go to a different page if the user is already logged in. if (GlobalVariable.isLoggedin())         this.NavigationService.Navigate(new Uri("/Home", UriKind.Relative)); On the submit button click event, add a new event handler, which will perform the WebClient request and download the results as an XML string. WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp); The following line allows us to create a web client to make a web request to a URL and get back the string response - something that came as great news with SL 4 for many SL developers. WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(TwitterUsername.Text, TwitterPassword.Password); In the following line, we add an event that is to be fired once the XML string has been downloaded. Here you can do all your XLINQ stuff. myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted); myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml")); Now let us look at implementing the TimelineRequestCompleted event. Here we are not actually using the string response we get from Twitter; I just use it to ensure the user is authenticated successfully, and then save the credentials and redirect to the home page. public void TimelineRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e) { if (e.Error != null) { MessageBox.Show("This application must be installed first"); } If there is no error, we can save the credentials to reuse them later.
else { GlobalVariable.saveCredentials(TwitterUsername.Text, TwitterPassword.Password); this.NavigationService.Navigate(new System.Uri("/Home", UriKind.Relative)); } } OK, so now the login page is done. Now the main thing – running this application. This credentials stuff will only work if the application is run out of the browser. So we need to fiddle with a few Silverlight project settings to enable this. Here is how: Right click on the Silverlight project > Properties, then check "Enable running application out of browser". Then click on Out-Of-Browser settings and check the "Require elevated trust…" option. That's it, all done to run. Now press F5 to run the application and fix the errors, if any. Then once the application opens up in the browser with the login page, right click and choose install. Once you install it, it will automatically run; you can log in and see that you are redirected to the Home page. Here are the files that are related to this post. We will look at implementing the Home page, etc… in the next post. Please post your comments and feedback; it would greatly help me in improving my posts! Thanks for your time, catch you soon.

    Read the article

  • Code refactoring with Visual Studio 2010 Part-4

    - by Jalpesh P. Vadgama
    I have been writing a few posts about code refactoring features in Visual Studio 2010. This post is also part of that series, and it will be the last of the series. In this post I am going to explain two features: 1) Encapsulate Field and 2) Extract Interface. Let’s explore both features in detail. Encapsulate Field: This is a nice code refactoring feature provided by Visual Studio 2010. With the help of this feature we can create properties from the existing private fields of a class. Let’s take a simple example of a Customer class. In it there are two private fields called firstName and lastName. Below is the code for the class. public class Customer { private string firstName; private string lastName; public string Address { get; set; } public string City { get; set; } } Now let's encapsulate the first field, firstName, with the Encapsulate Field feature. So first select that field, go to the Refactor menu in Visual Studio 2010 and click on Encapsulate Field. Once you click that, a dialog box will appear like the following. Now once you click OK, a preview dialog box will open, as we have selected "preview reference changes". I think it's a good option to check, so that you can preview the code being changed by the IDE itself. The dialog will look like the following. Once you click Apply it creates a new property called FirstName. I have done the same for lastName, and now my Customer class code looks like the following. public class Customer { private string firstName; public string FirstName { get { return firstName; } set { firstName = value; } } private string lastName; public string LastName { get { return lastName; } set { lastName = value; } } public string Address { get; set; } public string City { get; set; } } So you can see that it's very easy to create properties from existing fields, and you don't have to change anything in the code - it makes all the changes itself. Extract Interface: When you are writing a software prototype and you don't know its future implementation, it's a good practice to use an interface there. I am going to explain here how we can extract an interface from existing code without writing a single line of code, with the help of the code refactoring feature of Visual Studio 2010. For that I have created a simple repository class called CustomerRepository with three methods, like the following. public class CustomerRespository { public void Add() { // Some code to add customer } public void Update() { //some code to update customer } public void Delete() { //some code delete customer } } In the above class there are three methods, Add, Update and Delete, where we are going to implement some code for each one. Now I want to create an interface which I can use for my other entities in the project. So let's create an interface from the above class with the help of Visual Studio 2010. First select the class, go to the Refactor menu and click Extract Interface. It will open up a dialog box like the following. Here I have selected all the methods for the interface, and once I click OK it will create a new file called ICustomerRespository where it has created an interface, just like the following. Here is the code for that interface. using System; namespace CodeRefractoring { interface ICustomerRespository { void Add(); void Delete(); void Update(); } } Now let's see the code for our class. It will also be changed like the following to implement the interface.
public class CustomerRespository : ICustomerRespository { public void Add() { // Some code to add customer } public void Update() { //some code to update customer } public void Delete() { //some code delete customer } } Isn't that great? We have created an interface and implemented it without writing a single line of code. Hope you liked it. Stay tuned for more. Till then, Happy Programming.

    Read the article

  • To My 24 Year Old Self, Wherever You Are&hellip;

    - by D'Arcy Lussier
    A decade is a milestone in one’s life, regardless of when it occurs. 2011 might seem like a weird year to mark a decade, but 2001 was a defining year for me. It marked my emergence into the technology industry, an unexpected loss of innocence, and triggered an ongoing struggle with faith and belief. Once you go through a valley, climbing the mountain and looking back over where you travelled, you can take in the entirety of the journey. Over the last 10 years I kept journals, and in this new year I took some time to review them. For those today that are me a decade ago, I share with you what I’ve gleamed from my experiences. Take it for what it’s worth, and safe travels on your own journeys through life. Life is a Performance-Based Sport Have confidence, believe you’re capable, but realize that life is a performance-based sport. Everything you get in life is based on whether you can show that you deserve it. Performance is also your best defense against personal attacks. Just make sure you know what standards you’re expected to hit and if people want to poke holes at you let them do the work of trying to find them. Sometimes performance won’t matter though. Good things will happen to bad people, and bad things to good people. What’s important is that you do the right things and ensure the good and bad even out in your own life. How you finish is just as important as how you start. Start strong, end strong. Respect is Your Most Prized Reward Respect is more important than status or ego. The formula is simple: Performing Well + Building Trust + Showing Dedication = Respect Focus on perfecting your craft and helping your team and respect will come. Life is a Team Sport Whatever aspect of your life, you can’t do it alone. You need to rely on the people around you and ensure you’re a positive aspect of their lives; even those that may be difficult or unpleasant. Avoid criticism and instead find ways to help colleagues and superiors better whatever environment you’re in (work, home, etc.). Don’t just highlight gaps and issues, but also come to the table with solutions. At the same time though, stand up for yourself and hold others accountable for the commitments they make to the team. A healthy team needs accountability. Give feedback early and often, and make it verbal. Issues should be dealt with immediately, and positives should be celebrated as they happen. Life is a Contact Sport Difficult moments will happen. Don’t run from them or shield yourself from experiencing them. Embrace them. They will further mold you and reveal who you will become. Find Your Tribe and Embrace Your Community We all need a tribe: a group of people that we gravitate to for support, guidance, wisdom, and friendship. Discover your tribe and immerse yourself in them. Don’t look for a non-existent tribe just to fill the need of belonging though that will leave you empty and bitter when they don’t meet your unrealistic expectations. Try to associate with people more experienced and more knowledgeable than you. You’ll always learn, and you’ll always remember you have much to learn. Put yourself out there, get involved with the community. Opportunities will present themselves. When we open ourselves up to be vulnerable, we also give others the chance to do the same. This helps us all to grow and help each other, it’s very important. And listen to your wife. (Easter *is* a romantic holiday btw, regardless of what you may think.) 
Don’t Believe Your Own Press Clippings (and by that I mean the ones you write) Until you have a track record of performance to refer to, any notions of grandeur are just that: notions. You lose your rookie status through trials and tribulations, not by the number of stamps in your passport. Be realistic about your own “experience and leadership” and be honest when you aren’t ready for something. And always remember: nobody really cares about you as much as you think they do. Don’t Let Assholes Get You Down The world isn’t evil, but there is evil in the world. Know the difference and don’t paint all people with the same brush. Do be wary of those that use personal beliefs to describe their business (i.e. “We’re a [religion] company”). What matters is the culture of the organization, and that will tell you the moral compass and what is truly valued. Don’t make someone or something a priority that only makes you an option. Life is unfair and enemies/opponents will succeed when you fail. Don’t waste your energy getting upset at this; the only one that will lose out is you. As mentioned earlier, nobody really cares about you as much as you think they do. Misc Ecclesiastes is bullshit. Everything is certainly *not* meaningless. Software development is about delivery, not the process. Having a great process means nothing if you don’t produce anything. Watch “The Weatherman” (“It’s not easy, but easy doesn’t enter into grownup life.”). Read Tony Dungee’s autobiography, even if you don’t like football, and even if you aren’t a Christian. Say no, don’t feel like you have to commit right away when someone asks you to.

    Read the article

  • WSS 3.0 to SharePoint 2010: Tips for delaying the Visual Upgrade

    - by Kelly Jones
    My most recent project has been to migrate a bunch of sites from WSS 3.0 (SharePoint 2007) to SharePoint Server 2010.  The users are currently working with WSS 3.0 and Office 2003, so the new ribbon based UI in 2010 will be completely new.  My client wants to avoid the new SharePoint 2010 look and feel until they’ve had time to train their users, so we’ve been testing the upgrades by keeping them with the 2007 user interface. Permission to perform the Visual Upgrade One of the first things we noticed was the default permissions for who was allowed to switch the UI from 2007 to 2010.  By default, site collection administrators and site owners can do this.  Since we wanted to more tightly control the timing of the new UI, I added a few lines to the PowerShell script that we are using to perform the migration.  This script creates the web application, sets the User Policy, and then does a Mount-SPDatabase to attach the old 2007 content database to the 2010 farm.  I added the following steps after the Mount-SPDatabase step: #Remove the visual upgrade option for site owners # it remains for Site Collection administrators foreach ($sc in $WebApp.Sites){ foreach ($web in $sc.AllWebs){ #Visual Upgrade permissions for the site/subsite (web) $web.UIversionConfigurationEnabled = $false; $web.Update(); } } These script steps loop through each Site Collection in a particular web application ($WebApp) and then it loops through each subsite ($web) in the Site Collection ($sc) and disables the Site Owner’s permission to perform the Visual Upgrade. This is equivalent to going to the Site Collection administrator settings page –> Visual Upgrade and selecting “Hide Visual Upgrade”. Since only IT people have Site Collection administrator privileges, this will allow IT to control the timing of the new 2010 UI rollout. Newly created subsites Our next issue was brought to our attention by SharePoint Joel’s blog post last week (http://www.sharepointjoel.com/Lists/Posts/Post.aspx?ID=524 ).  In it, he lists some updates about the 2010 upgrade, and his fourth point was one that I hadn’t seen yet: 4. If a 2007 upgraded site has not been visually upgraded, the sites created underneath it will look like 2010 sites – While this is something I’ve been aware of, I think many don’t realize how this impacts common look and feel for master pages, and how it impacts good navigation and UI. As well depending on your patch level you may see hanging behavior in the list picker. The site and list creation Silverlight control in Internet Explorer is looking for resources that don’t exist in the galleries in the 2007 site, and hence it continues to spin and spin and eventually time out. The work around is to upgrade to SP1, or use Chrome or Firefox which won’t attempt to render the Silverlight control. When the root site collection is a 2007 site and has it’s set of galleries and the children are 2010 sites there is some strange behavior linked to the way that the galleries work and pull from the parent. Our production SharePoint 2010 Farm has SP1 installed, as well as the December 2011 Cumulative Update, so I think the “hanging behavior” he mentions won’t affect us. However, since we want to control the roll out of the UI, we are concerned that new subsites will have the 2010 look and feel, no matter what the parent site has. Ok, time to dust off my developer skills. I first looked into using feature stapling, but I couldn’t get that to work (although I’m pretty sure I had everything wired up correctly).  
Then I stumbled upon SharePoint 2010’s web events – a great way to handle this. Using Visual Studio 2010, I created a new SharePoint project and added a Web Event Receiver: In the Event Receiver class, I used the WebProvisioned method to check if the parent site is a 2007 site (UIVersion = 3), and if so, then set the newly created site to 2007:   /// <summary> /// A site was provisioned. /// </summary> public override void WebProvisioned(SPWebEventProperties properties) { base.WebProvisioned(properties);   try { SPWeb curweb = properties.Web;   if (curweb.ParentWeb != null) {   //check if the parent website has the 2007 look and feel if (curweb.ParentWeb.UIVersion == 3) { //since parent site has 2007 look and feel // we'll apply that look and feel to the current web curweb.UIVersion = 3; curweb.Update(); } } } catch (Exception) { //TODO: Add logging for errors } }   This event is part of a Feature that is scoped to the Site Level (Site Collection).  I added a couple of lines to my migration PowerShell script to activate the Feature for any site collections that we migrate. Plan Going Forward The plan going forward is to perform the visual upgrade after the users for a particular site collection have gone through 2010 training. If we need to do several site collections at once, we’ll use a PowerShell script to loop through each site collection to update the sites to 2010.  If it’s just one or two, we’ll be using the “Update All Sites” button on the Visual Upgrade page for Site Collection Administrators. The custom code for newly created sites won’t need to be changed, since it relies on the UI version of the parent site.  If the parent is 2010, then the new site will look 2010.

    Read the article

  • Silverlight 4 Twitter Client &ndash; Part 7

    - by Max
    Download this article as a PDF Welcome back :) This week we are going to look at something more exciting and a much-required feature for any Twitter client – auto refresh so as to show new status updates. We are going to achieve this using Silverlight 4 timers, refreshing our datagrid every minute to show new updates. We will do this so that we make only minimal requests to the Twitter API, so that Twitter does not block us – there is a limit of 150 requests an hour. Let us get started now. Also, we will get the profile user id hyperlinked, so that whenever the user clicks on it, we take them to their Twitter page. Also, it was a pain to always run this application by pressing F5: it would open in a browser, and you would have to right click, uninstall, and install it again to see any changes. All this and yet we were not able to debug it :( Now there is a solution for this – run the Silverlight application directly out of browser and still have the debug feature. Super cool, here is how. Right click on the Silverlight project, go to Debug, then select the Out-Of-Browser application option and choose the *.Web project. Then just right click on the SL project and set it as the Startup Project. There you go: now every time you press F5, it will automatically run out of browser and you still have the debug options. I got to know about this after some binging. Now let us jump to the core straight away. 1) To get the user id hyperlinked, we need to have a DataGridTemplateColumn and within that have a HyperlinkButton. The code for this will be <data:DataGridTemplateColumn> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <HyperlinkButton Click="HyperlinkButton_Click" Content="{Binding UserName}" TargetName="_blank" ></HyperlinkButton> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> 2) Now let us look at how we get this done by looking into the HyperlinkButton_Click event handler. There we will dynamically set the NavigateUri to the Twitter page. I tried to do this using some binding, eval-like stuff as in ASP.NET, but no luck! private void HyperlinkButton_Click(object sender, RoutedEventArgs e) { HyperlinkButton hb = (HyperlinkButton)e.OriginalSource; hb.NavigateUri = new Uri("http://twitter.com/" + hb.Content.ToString(), UriKind.Absolute); } 3) Now we need to switch on our timer right in the OnNavigatedTo event of our SL page. So we need to modify our OnNavigatedTo event to something like below: protected override void OnNavigatedTo(NavigationEventArgs e) { image1.Source = new BitmapImage(new Uri(GlobalVariable.profileImage, UriKind.Absolute)); this.Title = GlobalVariable.getUserName() + " - Home"; if (!GlobalVariable.isLoggedin()) this.NavigationService.Navigate(new Uri("/Login", UriKind.Relative)); else { currentGrid = "Timeline-Grid"; TwitterCredentialsSubmit(); myDispatcherTimer.Interval = new TimeSpan(0, 0, 0, 60, 0); myDispatcherTimer.Tick += new EventHandler(Each_Tick); myDispatcherTimer.Start(); } } I use a global string – here it is the currentGrid variable – to indicate what is bound in the datagrid, so that after every timer tick I can rebind the latest data to it again. For example, I will only rebind the friends timeline again if the data grid currently holds it, and I'll only rebind the respective list statuses again if a list status is already bound to the data grid. In the above timer code, it's set to trigger the Each_Tick event handler every 1 minute (60 seconds).
TimeSpan takes in (days, hours, minutes, seconds, milliseconds). 4) Now we need to set the list name in the currentGrid variable when a list button is clicked. So add the code line below to the list button event handler: currentGrid = currentList = b.Content.ToString(); 5) Now let us see how the Each_Tick event handler is implemented. public void Each_Tick(object o, EventArgs sender) { if (!currentGrid.Equals("Timeline-Grid")) getListStatuses(currentGrid); else { WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp); WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword()); myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted); myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml")); } } If the data grid holds the friends timeline, I just use the same bit of code we already had to bind the friends timeline to the data grid - copy, paste. But if it is some list timeline that is bound in the datagrid, I then call the getListStatuses method with the currentGrid string, which will actually be holding the list name. 6) I wanted to render the links inside the status message as hyperlinks, so that when the user clicks on one we can open that link. I tried using a converter with a regex to recognize a URL and wrap it up with an href, but that is not gonna work in a Silverlight TextBlock :( Anyway, that converter code is in the zip file. 7) You can get the complete project files from here. 8) Please comment below with your doubts, suggestions, and improvements. I will try to reply as early as possible. Thanks for all your support. Technorati Tags: Silverlight 4,Datagrid,Twitter API,Silverlight Timer

    Read the article

  • Sass interface in HTML6 for upload files.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx [This post is about experiment & imagination] Since Windows XP (the oldest OS I tried), Windows has had a feature for sending a file to a pen drive or making a shortcut on the Desktop. In XP and Win7 (Win8 has this too; it was not removed), just select the file, right click > Send to, and you can send the file to many places. In my menu it shows me Skype because I have it installed; this confirms that we can add our own app here to make it easier for the user to send a file into our app. Nowadays many people use the cloud or an online site to store their files. In the case of HTML5 drag and drop, you need to have the site open, and to have opened the page that handles the file upload. You need to select everything and drag and drop; after the drag and drop the file is simply uploaded to the server and the site shows it in a list (if no error happens). But this kind of file upload is seriously not worth it, since I have to open the site every time I do this operation. Through this post I want to describe a feature that could make this better. The API is simply called the SASS FILE UPLOAD API. With this API, when you surf a site and come to a file upload page, the page will tell you that it also has SASS FILE API support - enable it for a better experience. How this works: the API feature is activated on two conditions. 1. The feature is disabled by default on a site (or you can change that if it's not). 2. The API allows a specific site to upload files. File uploads may have some rules, for example a minimum or maximum size of the file to be uploaded, or which formats the site allows you to upload. In the case of a resume site you would be allowed to use .doc (according to the code of the site). How does the browser recognize that a site has the SASS service? In the HTML source of the site, the code has a meta tag similar to this: <meta name="sass-upload-api" path="/upload.json"/> Remember that upload.json is a file that defines the values of many settings: {   "cookie_name": "ck_file",   "maximum_allowed_perday": 24,   "allowed_file_extensions": "*.png,*.jpg,*.jpeg,*.gif",   "method": [       {           "get": "file/get",           "routing":"/file/get/{fileName}"       },       {           "post": "file/post",           "routing":"/file/post/{fileName}"       },       {           "delete": "file/delete",           "routing":"/file/delete/{fileName}"       },       {           "put": "file/put",           "routing":"/file/put/{fileName}"       },       {           "all": "file/all",           "routing":"/file/all/{fileName}"       }    ] } cookie_name is simply a cookie which should be stored in the browser and is defined in the JSON. We define the cookie_name so we can easily share it with the service in the Windows system. Only this cookie will be accessible to the service, so it's safe from a security point of view; other cookies will not be shared. The cookie will be sent with the post, put and get requests to these locations. The all location is simply about showing the whole list of files; it will give a treeview-like JSON to show the directories on the server. For example, take example.com: if you have activated the API with this site then you will see a Send to option in your explorer.exe. When you send, you will get a window asking which folder you want to send the file to. The window will also describe the limits and how much you can upload. This never requires the site to be opened. When you upload the file it will be uploaded through the FTP protocol; FTP is better for performance. How does this API make things faster?
Suppose you want to ask a question and post an image with it. You just send the file ahead of time, and when you open stackoverflow.com, Stack Overflow only has to ask which of those files you want to attach to the question you are asking. A second use is for people who use cloud apps. There is no need for drag and drop anymore; we can do it without dragging and dropping, and we don't need to open the site either. This is still at the experimental level. I will update this post when I make some progress on this API.

    Read the article

< Previous Page | 648 649 650 651 652 653 654 655 656 657 658 659  | Next Page >