Search Results

Search found 24015 results on 961 pages for 'tab key'.

Page 191/961 | < Previous Page | 187 188 189 190 191 192 193 194 195 196 197 198  | Next Page >

  • C# TabControl - is it possible to "disable" individual TabPages?

    - by CaldonCZE
    Is it somehow possible to disable one (or more) tabs of a TabControl? At some point I need to make the user stay on the active tab and prevent him from leaving it. I know I can disable the whole TabControl component, but that also disables all the controls on the active tab. I also tried the Selecting event of TabControl: private void TabControl_Selecting(object sender, TabControlCancelEventArgs e) { e.Cancel = PreventTabSwitch; } This works and prevents the user from switching (when PreventTabSwitch == true), but since all the tabs still look active and simply don't react, it is confusing. There is no Enabled property for individual tab pages, so I don't know what else to do. Thanks a lot in advance for any tips.
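
    A hedged sketch of one common workaround (class and member names are illustrative, not from the post): keep a set of pages that should be unreachable and cancel the Selecting event only for those, so the rest of the TabControl, including the controls on the active page, stays enabled. Greying out the tab caption additionally requires OwnerDraw, which is omitted here.

        using System.Collections.Generic;
        using System.Windows.Forms;

        public partial class MainForm : Form
        {
            // Pages currently "disabled"; membership is what the Selecting handler checks.
            private readonly HashSet<TabPage> disabledPages = new HashSet<TabPage>();

            // Wired to tabControl1.Selecting in the designer or the constructor.
            private void TabControl_Selecting(object sender, TabControlCancelEventArgs e)
            {
                if (e.TabPage != null && disabledPages.Contains(e.TabPage))
                    e.Cancel = true;   // block navigation into this page only
            }

            // Illustrative helper to mark a page reachable or not.
            private void SetTabEnabled(TabPage page, bool enabled)
            {
                if (enabled) disabledPages.Remove(page);
                else disabledPages.Add(page);
            }
        }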

    Read the article

  • IntelliJ Split Window Navigation

    - by jmquigley
    If I split the editor window (horizontally or vertically) into N tab groups, how do I switch/toggle from one tab group to another via the keyboard? If all of the tabs are in the same group I can switch between tabs easily (Ctrl + Right/Left Arrow), but when they're in separate tab groups I can't. I've searched through the key mappings and have not found one that seems to accomplish this. I know I can use the mouse, but I'm trying to avoid the mouse and stay on the keyboard. TIA for any help on this.

    Read the article

  • jQuery script works in Firefox but not in IE. Why am I not surprised?

    - by Ben Tew
    I'm working within the context of a CMS and trying to turn separate divs into tabs. You can see it at http://www.wtvynews4.com/test. I've kludged together some code from a tutorial site:

        <script charset="utf-8" type="text/javascript">
        jQuery(function() {
            // When page loads...
            $("div[ondblclick$='87119417']").attr("id", "87119417");
            $("div[ondblclick$='87119482']").attr("id", "87119482");
            $("div[ondblclick$='87119672']").attr("id", "87119672");
            $("div[ondblclick$='87119727']").attr("id", "87119727");
            $("div[ondblclick$='87119812']").attr("id", "87119812");

            $("div[ondblclick$='87119417']").addClass("tab_content");
            $("div[ondblclick$='87119482']").addClass("tab_content");
            $("div[ondblclick$='87119672']").addClass("tab_content");
            $("div[ondblclick$='87119727']").addClass("tab_content");
            $("div[ondblclick$='87119812']").addClass("tab_content");

            $(".tab_content").hide();                                 // Hide all content
            $("ul.morenewstabs li:first").addClass("active").show();  // Activate first tab
            $(".tab_content:first").show();                           // Show first tab content

            // On Click Event
            $("ul.morenewstabs li").click(function() {
                $("ul.morenewstabs li").removeClass("active");  // Remove any "active" class
                $(this).addClass("active");                     // Add "active" class to selected tab
                $(".tab_content").hide();                       // Hide all tab content
                var activeTab = $(this).find("a").attr("href"); // Find the href value to identify the active tab + content
                $(activeTab).show();                            // Show the active ID content
                return false;
            });
        });
        </script>

    Everything works fine in Firefox but not in IE. Can you provide any assistance? When the page loads, the ID attributes and classes aren't assigned. I tried changing jQuery(function() { to $(document).ready(function() {, but still no luck.
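
    A hedged guess at the cause, plus a sketch (the prefixing scheme is my assumption): older versions of IE return a function object rather than a string for event-handler attributes such as ondblclick, so the [ondblclick$='...'] attribute selector can silently match nothing there, and IDs that begin with a digit are not valid in HTML4, which IE is stricter about. One workaround is to stop selecting on the handler attribute, filter on its text instead, and prefix the generated IDs:

        // Assumption: the five story IDs are known up front, as in the original script.
        var ids = ["87119417", "87119482", "87119672", "87119727", "87119812"];

        jQuery(function () {
            $("div[ondblclick]").each(function () {
                // String() also covers IE handing back a function object here.
                var handlerText = String(this.getAttribute("ondblclick") || "");
                for (var i = 0; i < ids.length; i++) {
                    if (handlerText.indexOf(ids[i]) !== -1) {
                        // Prefix so the id does not start with a digit, e.g. "tab87119417".
                        $(this).attr("id", "tab" + ids[i]).addClass("tab_content");
                    }
                }
            });
        });

    The tab links' href values would then need the same "tab" prefix so the click handler still finds the right panel.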

    Read the article

  • Passing contextual info to Views in ASP.NET MVC

    - by Andrey
    I wonder: what is the best way to supply contextual info (i.e. not related to any particular view, but to all views at the same time) to a view (or to the master page)? Consider the following scenario. Suppose we have an app that supports multiple UI languages. The user can switch them via UI widgets (something like tabs at the top of the page). Each language is rendered as a separate tab; the tab for the current language should not be rendered. To address these requirements I'm planning to have a JavaScript piece that will hide the current language's tab on the client. To do this, I need the current language's tab Id on the client. So I need some way of passing the Id to the master page (for it to be 'fused' into the JS script). The best thing I can think of is that all my ViewModels should inherit from some ViewModelBase that has a field to hold the current language's tab Id. Then, whatever View I'm rendering, this Id will always be available to the master page's hiding script. However, I'm concerned that this ViewModelBase could grow in an uncontrolled fashion as the number of such pieces of contextual info (like the current language) grows. Any ideas?
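
    A minimal sketch of the approach being considered (class and property names are illustrative assumptions, not from the post):

        using System.Web.Mvc;

        public abstract class ViewModelBase
        {
            // Cross-cutting context every view (and the master page) may need.
            public string CurrentLanguageTabId { get; set; }
        }

        public class HomeViewModel : ViewModelBase
        {
            public string WelcomeMessage { get; set; }
        }

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                var model = new HomeViewModel
                {
                    WelcomeMessage = "Hello",
                    CurrentLanguageTabId = "tab-en"   // would really come from the user's language setting
                };
                return View(model);
            }
        }

    An alternative that keeps the base class from growing is to have a base controller push such values into ViewData (for example in OnActionExecuting), or to expose them through an HtmlHelper extension, so the master page can read them without every view model carrying them.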

    Read the article

  • Global Variable problem

    - by riteshkumar1905
    Hi, I am new to iPhone development. I use a flag variable to control which song is playing in AVAudioPlayer, and all songs are handled properly with that flag. We have two tabs in a tab bar, and I want the second tab to show info about whichever song is currently playing. Using the flag variable I can synchronize the song info with the song, but I can't access the value of the flag from the song-info tab, even though I import the global file in the song-info file. Please help me define a global integer that I can access from anywhere in the project.
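
    A hedged sketch of the usual pattern for a project-wide integer in Objective-C/C (the file names are illustrative): declare the variable as extern in a shared header and define it exactly once in one implementation file.

        /* Globals.h -- imported (or added to the prefix header) wherever the flag is needed */
        extern int gCurrentSongFlag;

        /* Globals.m -- the single definition */
        int gCurrentSongFlag = 0;

    Any file that imports Globals.h can then read or assign gCurrentSongFlag. A singleton or a property on the app delegate is often preferred over a raw global, but this is the most direct answer to the question as asked.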

    Read the article

  • jquery tabs question

    - by badnaam
    The problem is that every time a tab is activated, the cursor jumps to the top of the page, since the tab link points to a div and the page scrolls up to the top of that div. This creates a jumpy effect if the user has scrolled down a bit while reading the tab content. Is there any way to prevent this?
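
    A hedged sketch of the common fix (the selectors are assumptions, since the markup isn't shown): stop the anchor's default jump in the tab click handler and show the panel yourself.

        $(function () {
            $(".tabs a").click(function (e) {
                e.preventDefault();               // don't let the browser scroll to the anchored div
                var panel = $(this).attr("href"); // e.g. "#tab-1"
                $(".tab_content").hide();
                $(panel).show();
                // returning false would also work: it calls preventDefault() and stopPropagation()
            });
        });

    If a tab plugin is in use, most of them offer an option or event hook to suppress the default anchor behaviour instead.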

    Read the article

  • Can a Java HashMap's size() be out of sync with its actual entries' size?

    - by trix
    I have a Java HashMap called statusCountMap. Calling size() returns 30, but if I count the entries manually there are 31. This is in one of my TestNG unit tests. The results below are from Eclipse's Display window (type the code, highlight it, hit Display Result of Evaluating Selected Text):

        statusCountMap.size()
        (int) 30
        statusCountMap.keySet().size()
        (int) 30
        statusCountMap.values().size()
        (int) 30
        statusCountMap
        (java.util.HashMap) {40534-INACTIVE=2, 40526-INACTIVE=1, 40528-INACTIVE=1, 40492-INACTIVE=3, 40492-TOTAL=4, 40513-TOTAL=6, 40532-DRAFT=4, 40524-TOTAL=7, 40526-DRAFT=2, 40528-ACTIVE=1, 40524-DRAFT=2, 40515-ACTIVE=1, 40513-DRAFT=4, 40534-DRAFT=1, 40514-TOTAL=3, 40529-DRAFT=4, 40515-TOTAL=3, 40492-ACTIVE=1, 40528-TOTAL=4, 40514-DRAFT=2, 40526-TOTAL=3, 40524-INACTIVE=2, 40515-DRAFT=2, 40514-ACTIVE=1, 40534-TOTAL=3, 40513-ACTIVE=2, 40528-DRAFT=2, 40532-TOTAL=4, 40524-ACTIVE=3, 40529-ACTIVE=1, 40529-TOTAL=5}
        statusCountMap.entrySet().size()
        (int) 30

    What gives? Has anyone experienced this? I'm pretty sure statusCountMap is not being modified at this point. There are two methods (let's call them methodA and methodB) that modify statusCountMap concurrently, by repeatedly calling incrementCountInMap:

        private void incrementCountInMap(Map map, Long id, String qualifier) {
            String key = id + "-" + qualifier;
            if (map.get(key) == null) {
                map.put(key, 0);
            }
            synchronized (map) {
                map.put(key, map.get(key).intValue() + 1);
            }
        }

    methodD is where I'm getting the issue. methodD has a TestNG @dependsOnMethods = { "methodA", "methodB" }, so by the time methodD executes, statusCountMap is pretty much static already. I'm mentioning this because it might be a bug in TestNG. I'm using Sun JDK 1.6.0_24; TestNG is testng-5.9-jdk15.jar. Hmm... after rereading my post, could the concurrent execution of the map.get(key) == null check and map.put(key, 0) outside the synchronized block be what's causing this issue?
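
    A hedged note plus sketch (not from the post): unsynchronized concurrent writes can corrupt a HashMap's internal bookkeeping, so size() and the visible entries can disagree, and the unguarded map.get(key) == null / map.put(key, 0) pair before the synchronized block is exactly such a write. Two ways to make the increment safe (imports assumed: java.util.Map and java.util.concurrent.*; Map.merge would be simpler but is Java 8+, and the post uses JDK 1.6):

        // Option 1: do the whole check-and-increment under the same lock.
        private void incrementCountInMap(Map<String, Integer> map, Long id, String qualifier) {
            String key = id + "-" + qualifier;
            synchronized (map) {
                Integer current = map.get(key);
                map.put(key, current == null ? 1 : current + 1);
            }
        }

        // Option 2: use a ConcurrentHashMap and a compare-and-set style loop.
        private final ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<String, Integer>();

        private void increment(Long id, String qualifier) {
            String key = id + "-" + qualifier;
            for (;;) {
                Integer current = counts.get(key);
                if (current == null) {
                    if (counts.putIfAbsent(key, 1) == null) return;   // we won the race
                } else if (counts.replace(key, current, current + 1)) {
                    return;
                }
            }
        }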

    Read the article

  • Convert Markdown text to RTF, using Ruby and Pandoc?

    - by niteshade
    Playing with Ruby and pandoc-ruby. It seems like a nice tool, if I can get it to work. I'd like to convert some Markdown text (with embedded lists and other fanciness) to Rich Text. Here's the text I'm converting:

        Title
        ===

        This is a paragraph. Hallelujah.

        Here comes a nested list.
        ---

        * List item 1
            * List item 1.1
            * List item 1.2
        * List item 2
            * List item 2.1

    Here's my Ruby code...

        require 'pandoc-ruby'
        input = File.read('test.md')
        converter = PandocRuby.new(input, from: :markdown, to: :rtf)
        puts converter.convert

    ... which (after saving the output to a file) produces a document that shows nothing but the title in my RTF viewer. Here's the code of the RTF file:

        {\pard \ql \f0 \sa180 \li0 \fi0 \b \fs36 Title\par}
        {\pard \ql \f0 \sa180 \li0 \fi0 This is a paragraph. Hallelujah.\par}
        {\pard \ql \f0 \sa180 \li0 \fi0 \b \fs32 Here comes a nested list.\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1.1\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1.2\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 2\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 2.1\sa180\par}

    In addition, even if it did show up in my RTF viewer (Mac TextEdit), the RTF code seems to have lost all list nesting. I don't know how to diagnose this: whether I have left out necessary header information or need to pass something to pandoc-ruby. Thanks in advance!
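
    A hedged diagnosis (worth verifying): by default pandoc emits an RTF fragment without the {\rtf1 ...} document header, and TextEdit shows little or nothing for a header-less file. Running pandoc in standalone mode wraps the output in a complete RTF document. From the shell that is:

        pandoc -s -f markdown -t rtf test.md -o test.rtf

    pandoc-ruby passes extra arguments through to the pandoc binary, so something along these lines should work (the exact option syntax is an assumption; check the gem's README):

        require 'pandoc-ruby'
        input = File.read('test.md')
        puts PandocRuby.new(input, :standalone, from: :markdown, to: :rtf).convert

    The missing nesting is a separate issue: make sure the sub-items are indented by four spaces in the Markdown source, otherwise pandoc treats them as siblings.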

    Read the article

  • Writing Content Between Firefox Tabs

    - by GregH
    I am trying to write some values that I extract from a page (via JS/JQuery) opened in a tab in Firefox, to another opened page in a different tab within Firefox. Is this possible? Basically, I am trying to write some values I extract to a Google document that I have open in a different tab. I can see the "document" value in the DOM for my Google Document is something like: Doc?docid=0AQyS4r3XWCQ7ZGZ3dnE2OHNfMTNmcHE2OHAzMg&hl=en Can I just write to that document?

    Read the article

  • modified closure warning in ReSharper

    - by Sarah Vessels
    I was hoping someone could explain to me what bad thing could happen in this code, which causes ReSharper to give an 'Access to modified closure' warning: bool result = true; foreach (string key in keys.TakeWhile(key => result)) { result = result && ContainsKey(key); } return result; Even if the code above seems safe, what bad things could happen in other 'modified closure' instances? I often see this warning as a result of using LINQ queries, and I tend to ignore it because I don't know what could go wrong. ReSharper tries to fix the problem by making a second variable that seems pointless to me, e.g. it changes the foreach line above to: bool result1 = result; foreach (string key in keys.TakeWhile(key => result1)) Update: on a side note, apparently that whole chunk of code can be converted to the following statement, which causes no modified closure warnings: return keys.Aggregate( true, (current, key) => current && ContainsKey(key) );
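
    The classic case where this warning bites (a sketch, not from the question): capturing a loop variable that keeps changing, so every delegate sees the variable's final value rather than the value at the time it was created.

        using System;
        using System.Collections.Generic;

        class ClosureDemo
        {
            static void Main()
            {
                var actions = new List<Action>();
                foreach (var n in new[] { 1, 2, 3 })
                {
                    actions.Add(() => Console.WriteLine(n)); // captures the variable n itself, not its current value
                }
                foreach (var a in actions) a(); // compiled as C# 4 or earlier this prints 3 3 3; C# 5 changed foreach to print 1 2 3
            }
        }

    Copying the loop variable to a local inside the loop (exactly what ReSharper's result1 does) restores the expected 1 2 3. In the snippet from the question the lambda runs immediately inside TakeWhile, so capturing result is intentional and safe; ReSharper just can't prove that, which is why its suggested fix looks pointless there.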

    Read the article

  • jQuery: Hide/Display tabs (and its corresponding content) with check boxes

    - by Ricardo
    Hello. Well, this must be very simple for most of you, but I have no idea how to accomplish it. I have a set of tabs, and on top of the tabs is a set of checkboxes; each checkbox corresponds to a tab. What I need is to be able to check/uncheck each checkbox and have its corresponding tab (and that tab's content) show/hide. Here's my HTML:

        <div class="show-results-from">
          <ul>
            <li>See results from:</li>
            <li><label><input type="checkbox" name="a" id="a"> Products &amp; Services <span>(16)</span></label></li>
            <li><label><input type="checkbox" name="b" id="b"> Publications <span>(9)</span></label></li>
            <li><label><input type="checkbox" name="c" id="c"> Other <span>(150)</span></label></li>
          </ul>
        </div>

        <ul class="tabs">
          <li><span rel="tabs1" class="defaulttab">Products &amp; Services</span></li>
          <li><span rel="tabs2">Publications</span></li>
          <li><span rel="tabs3">Other</span></li>
        </ul>

        <div class="tab-content" id="tabs1">content</div>
        <div class="tab-content" id="tabs2">content</div>
        <div class="tab-content" id="tabs3">content</div>

    Any help with this is greatly appreciated.
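
    A hedged sketch of one way to wire this up, relying on the fact that the n-th checkbox, the n-th .tabs li, and #tabs{n} line up in the markup above (that positional assumption is mine):

        $(function () {
            $(".show-results-from input:checkbox").change(function () {
                // Index of this checkbox among the three (0, 1, 2).
                var i = $(".show-results-from input:checkbox").index(this);
                var tab = $("ul.tabs li").eq(i);
                var panel = $("#tabs" + (i + 1));
                if (this.checked) {
                    tab.show();
                } else {
                    tab.hide();
                    panel.hide(); // also hide the content if its tab goes away
                }
            });
        });

    The existing tab-switching code still decides which visible panel is shown; this only controls which tabs are available.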

    Read the article

  • MySQL Multiple Table Join

    - by hitman001
    I have a 3 tables that I'm trying to join and get distinct results. CREATE TABLE `car` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL DEFAULT '', PRIMARY KEY (`id`) ) ENGINE=InnoDB mysql> select * from car; +----+-------+ | id | name | +----+-------+ | 1 | acura | +----+-------+ CREATE TABLE `tires` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `tire_desc` varchar(255) DEFAULT NULL, `car_id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `new_fk_constraint` (`car_id`), CONSTRAINT `new_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB mysql> select * from tires; +----+-------------+--------+ | id | tire_desc | car_id | +----+-------------+--------+ | 1 | front_right | 1 | | 2 | front_left | 1 | +----+-------------+--------+ CREATE TABLE `lights` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `lights_desc` varchar(255) NOT NULL, `car_id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `new1_fk_constraint` (`car_id`), CONSTRAINT `new1_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB mysql> select * from lights; +----+-------------+--------+ | id | lights_desc | car_id | +----+-------------+--------+ | 1 | right_light | 1 | | 2 | left_light | 1 | +----+-------------+--------+ Here is my query. mysql> SELECT name, group_concat(tire_desc), group_concat(lights_desc) FROM car left join tires on car.id = tires.car_id left join lights on car.id = car_id group by car.id; +-------+-----------------------------------------------+-----------------------------------------------+ | name | group_concat(tire_desc) | group_concat(lights_desc) | +-------+-----------------------------------------------+-----------------------------------------------+ | acura | front_right,front_right,front_left,front_left | right_light,left_light,right_light,left_light | +-------+-----------------------------------------------+-----------------------------------------------+ I get duplicate entires and this is what I would like to get. +-------+-----------------------------------------------+--------------------------------+ | name | group_concat(tire_desc) | group_concat(lights_desc) | +-------+-----------------------------------------------+--------------------------------+ | acura | front_right,front_left | right_light,left_light | +-------+-----------------------------------------------+--------------------------------+ I cannot use distinct in group_concat because I might have legitimate duplicates which I would like to keep. Is there any way to do this query using joins and not using inner selects like the statement below? SELECT name, (select group_concat(tire_desc) from tires where car.id = tires.car_id), (select group_concat(lights_desc) from lights where car.id = lights.car_id) FROM car Also, if I will use inner selects, will there be any performance issues over joins?
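
    A hedged sketch of the usual fix (table and column names taken from the question): aggregate each child table in its own derived table first, then join the pre-grouped rows, so the two one-to-many joins can no longer multiply each other's rows while legitimate duplicates within a single table are preserved.

        SELECT c.name, t.tire_list, l.light_list
        FROM car c
        LEFT JOIN (
            SELECT car_id, GROUP_CONCAT(tire_desc) AS tire_list
            FROM tires
            GROUP BY car_id
        ) t ON t.car_id = c.id
        LEFT JOIN (
            SELECT car_id, GROUP_CONCAT(lights_desc) AS light_list
            FROM lights
            GROUP BY car_id
        ) l ON l.car_id = c.id;

    Two side notes: the original second join condition is written as "on car.id = car_id"; qualifying it as lights.car_id avoids any ambiguity with tires.car_id. Correlated subqueries in the SELECT list (the last statement shown) also work, and with only a handful of rows per car the performance difference is usually minor.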

    Read the article

  • Need Explanation of couchdb reduce function

    - by Alan
    From http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views The couchdb reduce function is defined as function (key, values, rereduce) { return sum(values); } key will be an array whose elements are arrays of the form [key,id] values will be an array of the values emitted for the respective elements in keys i.e. reduce([ [key1,id1], [key2,id2], [key3,id3] ], [value1,value2,value3], false) I am having trouble understanding when/why the array of keys would contain different key values. If the array of keys does contain different key values, how would I deal with it? As an example, assume that my database contains movements between accounts of the form. {"amount":100, "CreditAccount":"account_number", "DebitAccount":"account_number"} I want a view that gives the balance of an account. My map function does: emit( doc.CreditAccount, doc.amount ) emit( doc.DebitAccount, -doc.amount ) My reduce function does: return sum(values); I seem to get the expected results, however I can't reconcile this with the possibility that my reduce function gets different key values. Is my reduce function supposed to group key values first? What kind of result would I return in that case?

    Read the article

  • memcached: which is faster, doing an add (and checking result), or doing a get (and set when returni

    - by Mike Sherov
    The title of this question isn't so clear, but the code and question is straightforward. Let's say I want to show my users an ad once per day. To accomplish this, every time they visit a page on my site, I check to see if a certain memcache key has any data stored on it. If so, don't show an ad. If not, store the value '1' in that key with an expiration of 86400. I can do this 2 ways: //version a $key='OPD_'.date('Ymd').'_'.$type.'_'.$user; if($memcache->get($key)===false){ $memcache->set($key,'1',false,$expire); //show ad } //version b $key='OPD_'.date('Ymd').'_'.$type.'_'.$user; if($memcache->add($key,'1',false,$expire)){ //show ad } Now, it might seem obvious that b is better, it always makes 1 memcache call. However, what is the overhead of "add" vs. "get"? These aren't the real comparisons... and I just made up these numbers, but let's say 1 add ~= 1 set ~= 5 get in terms of effort, and the average user views 5 pages a day: a: (5 get * 1 effort) + (1 set * 5 effort) = 10 units of effort b: (5 add * 5 effort) = 25 units of effort Would it make sense to always do the add call? Is this an unnecessary micro-optimization?

    Read the article

  • How to create animated sliding windows/tabs menu?

    - by Forte
    I have created a navigation menu in YUI 2.8 as shown below. I have also animated the tabs using CSS transitions, but CSS transitions are not widely supported by browsers, and my animations do not work in Opera, IE, etc. Since I'm already using YUI 2.8 on that page, can somebody tell me how to animate those tabs with it? When I click on any tab, it should expand vertically and smoothly (animated). These are the properties of the tabs that change when I select a tab, and that should be animated: paddings, margins, background-color, borders. Please note in the above image: there is a little space left on the right side in case #1 when the 1st tab is selected. In case #2 and case #3 there is space left on both the left and right sides. In case #4, there is some space left on the left side when the last tab is selected.
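
    A hedged sketch using YUI 2's Animation utility (the animation module must be loaded; element ids, target values, and durations are illustrative assumptions): box properties go through YAHOO.util.Anim, colors through YAHOO.util.ColorAnim.

        // Animate the clicked tab's padding/margins.
        var boxAnim = new YAHOO.util.Anim('tab1', {
            paddingTop:    { to: 12 },   // px by default
            paddingBottom: { to: 12 },
            marginBottom:  { to: 0 }
        }, 0.3, YAHOO.util.Easing.easeOut);

        // Animate its background color separately.
        var colorAnim = new YAHOO.util.ColorAnim('tab1', {
            backgroundColor: { to: '#ffcc00' }
        }, 0.3, YAHOO.util.Easing.easeOut);

        boxAnim.animate();
        colorAnim.animate();

    Border widths can be animated the same way (borderTopWidth, etc.), though some browsers need a border-style already set. Hook this into whatever click handler currently switches the selected tab.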

    Read the article

  • What groupbox method (if any) monitors radio button selections?

    - by JohnK813
    I'm creating a VB.NET application that includes two radio buttons inside of a groupbox. If the first radio button is selected, a certain tab on a tab form should be enabled. If the second radio button is selected, that tab should be disabled. Is there a groupbox method that monitors both radio buttons and fires when the selection changes? Or do I need to set up individual methods for each radio button?
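
    A hedged sketch in VB.NET (control names are assumptions): the GroupBox itself raises no event when a contained RadioButton changes; the usual approach is to point both buttons' CheckedChanged events at one handler via the Handles clause, which is as close to a single "monitoring" method as it gets.

        ' One handler for both radio buttons.
        Private Sub RadioButtons_CheckedChanged(sender As Object, e As EventArgs) _
                Handles rbEnableTab.CheckedChanged, rbDisableTab.CheckedChanged
            ' CheckedChanged fires for the button being unchecked as well as the one being checked,
            ' so read the state rather than relying on which button raised the event.
            Dim enableIt As Boolean = rbEnableTab.Checked
            ' Note: TabPage.Enabled greys out the page's controls but does not stop the user
            ' selecting the tab; cancel the TabControl's Selecting event if it must be unreachable.
            tabMain.TabPages("tpDetails").Enabled = enableIt
        End Sub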

    Read the article

  • Outputting array contents as nested list in PHP

    - by Mamadou
    I have the array $tab[1,2,3,4,5,6,7,8,9,10] and I would like to display it like this:

        <ul>
          <li>
            <a href="">FIRST ELEMENT OF THE TAB ==> 1</a>
            <a href="">SECOND ELEMENT OF THE TAB ==> 2</a>
          </li>
          <li>
            <a href="">THIRD ELEMENT ==> 3</a>
            <a href="">FOURTH ELEMENT OF THE TAB ==> 4</a>
          </li>
          <li>
            <a href="">FIFTH ELEMENT ==> 5</a>
            <a href="">SIXTH ELEMENT OF THE TAB ==> 6</a>
          </li>
        </ul>

    How can I achieve this in PHP? I am thinking of creating sub-arrays with array_slice.
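
    A hedged sketch of a minimal approach (assuming two items per <li>): array_chunk does the grouping that array_slice would otherwise be used for.

        <?php
        $tab = array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

        echo "<ul>\n";
        foreach (array_chunk($tab, 2) as $pair) {
            echo "  <li>\n";
            foreach ($pair as $value) {
                echo '    <a href="">ELEMENT OF THE TAB ==> ' . $value . "</a>\n";
            }
            echo "  </li>\n";
        }
        echo "</ul>\n";

    The ordinal wording ("FIRST", "SECOND", ...) would need a small lookup table if the exact labels matter.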

    Read the article

  • Add the view controller in the TabBarController from within one of its viewControllers ?

    - by user550001
    Hi, I am new to iPhone/iPad development. I am developing an application using UITabBarController. I have created a tab bar controller class in which I set up a UITabBarController object through a NIB file. There are 4 tabs: Login Page, Category Page, About Us Page, Settings Page. After the user logs in on the Login Page, I want to add a Logout tab to the tab bar controller programmatically; when the user taps the Logout tab, it should return to the home/login screen and the Logout tab should be removed. So I need a code snippet that adds a tab bar item to a UITabBarController from within one of its view controllers. Thank you in advance.
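
    A hedged sketch in Objective-C (the view-controller class name is an assumption): UITabBarController exposes its tabs through the viewControllers array, so adding and removing a tab is a matter of rebuilding that array, typically via self.tabBarController from inside one of the existing view controllers.

        // Add a Logout tab after a successful login (e.g. from the login view controller).
        LogoutViewController *logoutVC = [[[LogoutViewController alloc] init] autorelease];
        logoutVC.tabBarItem = [[[UITabBarItem alloc] initWithTitle:@"Logout"
                                                             image:nil
                                                               tag:4] autorelease];
        NSMutableArray *tabs = [[self.tabBarController.viewControllers mutableCopy] autorelease];
        [tabs addObject:logoutVC];
        [self.tabBarController setViewControllers:tabs animated:YES];

        // Later, when Logout is tapped: switch back to the login tab and drop the Logout tab.
        self.tabBarController.selectedIndex = 0;
        NSMutableArray *remaining = [[self.tabBarController.viewControllers mutableCopy] autorelease];
        [remaining removeObject:logoutVC];
        [self.tabBarController setViewControllers:remaining animated:YES];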

    Read the article

  • comparison of strings

    - by EmiLazE
    I am writing a program that simulates the game Mastermind, but I am struggling with how to compare the guessed pattern to the key pattern. The game conditions are changed a little bit: patterns consist of letters. If an element of the guessed pattern is equal to an element of the key pattern and the index is also equal, print 'b'. If an element of the guessed pattern is equal to an element of the key pattern but the index is not, print 'w'. If an element of the guessed pattern is not equal to any element of the key pattern, print '.'. In the feedback about the guessed pattern, the 'b's come first, the 'w's second, and the '.'s last. My problem is that I cannot think of an approach that satisfies all of these conditions at once.

        for (i = 0; i < patternlength; i++) {
            for (x = 0; x < patternlength; x++) {
                if (guess[i] == key[x] && i == x)
                    printf("b");
                if (guess[i] == key[x] && i != x)
                    printf("w");
                if (guess[i] != key[x])
                    printf(".");
            }
        }
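
    A hedged sketch of the usual two-pass approach (assuming each key position may be credited only once, as in standard Mastermind scoring): count the exact matches first, then the misplaced ones, and only then print, so the required b / w / . order falls out naturally.

        #include <stdio.h>
        #include <string.h>

        /* Score a guess against the key; both are strings of the same length (<= 64 here). */
        void score(const char *key, const char *guess)
        {
            int len = (int)strlen(key);
            int used[64] = {0};          /* key positions already credited */
            int b = 0, w = 0, i, x;

            /* Pass 1: right letter, right position. */
            for (i = 0; i < len; i++) {
                if (guess[i] == key[i]) { b++; used[i] = 1; }
            }
            /* Pass 2: right letter, wrong position; each key letter counted at most once. */
            for (i = 0; i < len; i++) {
                if (guess[i] == key[i]) continue;        /* already counted as 'b' */
                for (x = 0; x < len; x++) {
                    if (!used[x] && guess[i] == key[x]) {
                        used[x] = 1;
                        w++;
                        break;
                    }
                }
            }
            /* Print: all 'b's, then all 'w's, then '.' for the rest. */
            for (i = 0; i < b; i++) putchar('b');
            for (i = 0; i < w; i++) putchar('w');
            for (i = 0; i < len - b - w; i++) putchar('.');
            putchar('\n');
        }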

    Read the article

  • Dotfuscator Deep Dive with WP7

    - by Bil Simser
    I thought I would share some experience with code obfuscation (specifically the Dotfuscator product) and Windows Phone 7 apps. These days twitter is a buzz with black hat and white operations coming out about how the marketplace is insecure and Microsoft failed, blah, blah, blah. So it’s that much more important to protect your intellectual property. You should protect it no matter what when releasing apps into the wild but more so when someone is paying for them. You want to protect the time and effort that went into your code and have some comfort that the casual hacker isn’t going to usurp your next best thing. Enter code obfuscation. Code obfuscation is one tool that can help protect your IP. Basically it goes into your compiled assemblies, rewrites things at an IL level (like renaming methods and classes and hiding logic flow) and rewrites it back so that the assembly or executable is still fully functional but prying eyes using a tool like ILDASM or Reflector can’t see what’s going on.  You can read more about code obfuscation here on Wikipedia. A word to the wise. Code obfuscation isn’t 100% secure. More so on the WP7 platform where the OS expects certain things to be as they were meant to be. So don’t expect 100% obfuscation of every class and every method and every property. It’s just not going to happen. What this does do is give you some level of protection but don’t put all your eggs in one basket and call it done. Like I said, this is just one step in the process. There are a few tools out there that provide code obfuscation and support the Windows Phone 7 platform (see links to other tools at the end of this post). One such tool is Dotfuscator from PreEmptive solutions. The thing about Dotfuscator is that they’ve struck a deal with Microsoft to provide a *free* copy of their commercial product for Windows Phone 7. The only drawback is that it only runs until March 31, 2010. However it’s a good place to start and the focus of this article. Getting Started When you fire up Dotfuscator you’re presented with a dialog to start a new project or load a previous one. We’ll start with a new project. You’re then looking at a somewhat blank screen that shows an Input tab (among others) and you’re probably wondering what to do? Click on the folder icon (first one) and browse to where your xap file is. At this point you can save the project and click on the arrow to start the process. Bam! You’re done. Right? Think again. The program did indeed run and create a new version of your xap (doing it’s thing and rewriting back your *obfuscated* assemblies) but let’s take a look at the assembly in Reflector to see the end result. Remember a xap file is really just a glorified zip file (or cab file if you prefer). When you ran Dotfuscator for the first time with the default settings you’ll see it created a new version of your xap in a folder under “My Documents” called “Dotfuscated” (you can configure the output directory in settings). Here’s the new xap file. Since a xap is just a zip, rename it to .cab or .zip or something and open it with your favorite unarchive program (I use WinRar but it doesn’t matter as long as it can unzip files). If you already have the xap file associated with your unarchive tool the rename isn’t needed. 
Once renamed extract the contents of the xap to your hard drive: Now you’ll have a folder with the contents of the xap file extracted: Double click or load up your assembly (WindowsPhoneDataBoundApplication1.dll in the example) in Reflector and let’s see the results: Hmm. That doesn’t look right. I can see all the methods and the code is all there for my LoadData method I wanted to protect. Product failure. Let’s return it for a refund. Hold your horses. We need to check out the settings in the program first. Remember when we loaded up our xap file. It started us on the Input tab but there was a settings tab before that. Wonder what it does? Here’s the default settings: Renaming Taking a closer look, all of the settings in Feature are disabled. WTF? Yeah, it leaves me scratching my head why an obfuscator by default doesn’t obfuscate. However it’s a simple fix to change these settings. Let’s enable Renaming as it sounds like a good start. Renaming obscures code by renaming methods and fields to names that are not understandable. Great. Run the tool again and go through the process of unzipping the updated xap and let’s take a look in Reflector again at our project. This looks a lot better. Lots of methods named a, b, c, d, etc. That’ll help slow hackers down a bit. What about our logic that we spent days weeks on? Let’s take a look at the LoadData method: What gives? We have renaming enabled but all of our code is still there. If you look through all your methods you’ll find it’s still sitting there out in the open. Control Flow Back to the settings page again. Let’s enable Control Flow now. Control Flow obfuscation synthesizes branching, conditional, and iterative constructs (such as if, for, and while) that produce valid executable logic, but yield non-deterministic semantic results when decompilation is attempted. In other words, the code runs as before, but decompilers cannot reproduce the original code. Do the dance again and let’s see the results in Reflector. Ahh, that’s better. Methods renamed *and* nobody can look at our LoadData method now. Life is good. More than Minimum This is the bare minimum to obfuscate your xap to at least a somewhat comfortable level. However I did find that while this worked in my Hello World demo, it didn’t work on one of my real world apps. I had to do some extra tweaking with that. Below are the screens that I used on one app that worked. I’m not sure what it was about the app that the approach above didn’t work with (maybe the extra assembly?) but it works and I’m happy with it. YMMV. Remember to test your obfuscated app on your device first before submitting to ensure you haven’t obfuscated the obfuscator. settings tab: rename tab: string encryption tab: premark tab: A few final notes Play with the settings and keep bumping up the bar to try to get as much obfuscation as you can. The more the better but remember you can overdo it. Always (always, always, always) deploy your obfuscated xap to your device and test it before submitting to the marketplace. I didn’t and got rejected because I had gone overboard with the obfuscation so the app wouldn’t launch at all. Not everything is going to be obfuscated. Specifically I don’t see a way to obfuscate auto properties and a few other language features. Again, if you crank the settings up you might hide these but I haven’t spent a lot of time optimizing the process. Some people might say to obfuscate your xaml using string encryption but again, test, test, test. 
Xaml is picky so too much obfuscation (or any) might disable your app or produce odd rendering effects. Remember, obfuscation is not 100% secure! Don't rely on it as the sole way of protecting your assets. Other Tools Dotfuscator is just one product and isn't the end-all be-all of obfuscation, so check out others below. For example, Crypto can make it so Reflector doesn't even recognize the app as a .NET one and won't open it. Others can encrypt resources and Xaml markup files. Here are some other obfuscators that support the Windows Phone 7 platform. Feel free to give them a try and let people know your experience with them! Dotfuscator Windows Phone Edition Crypto Obfuscator for .NET DeepSea Obfuscation

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
To create the Database Connection artifact, I must know the location of, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click the New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID’s to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row. “ Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • OpenGL basics: calling glDrawElements once per object

    - by Bethor
    Hi all, continuing on from my explorations of the basics of OpenGL (see this question), I'm trying to figure out the basic principles of drawing a scene with OpenGL. I am trying to render a simple cube repeated n times in every direction. My method appears to yield terrible performance : 1000 cubes brings performance below 50fps (on a QuadroFX 1800, roughly a GeForce 9600GT). My method for drawing these cubes is as follows: done once: set up a vertex buffer and array buffer containing my cube vertices in model space set up an array buffer indexing the cube for drawing as 12 triangles done for each frame: update uniform values used by the vertex shader to move all cubes at once done for each cube, for each frame: update uniform values used by the vertex shader to move each cube to its position call glDrawElements to draw the positioned cube Is this a sane method ? If not, how does one go about something like this ? I'm guessing I need to minimize calls to glUniform, glDrawElements, or both, but I'm not sure how to do that. Full code for my little test : (depends on gletools and pyglet) I'm aware that my init code (at least) is really ugly; I'm concerned with the rendering code for each frame right now, I'll move to something a little less insane for the creation of the vertex buffers and such later on. import pyglet from pyglet.gl import * from pyglet.window import key from numpy import deg2rad, tan from gletools import ShaderProgram, FragmentShader, VertexShader, GeometryShader vertexData = [-0.5, -0.5, -0.5, 1.0, -0.5, 0.5, -0.5, 1.0, 0.5, -0.5, -0.5, 1.0, 0.5, 0.5, -0.5, 1.0, -0.5, -0.5, 0.5, 1.0, -0.5, 0.5, 0.5, 1.0, 0.5, -0.5, 0.5, 1.0, 0.5, 0.5, 0.5, 1.0] elementArray = [2, 1, 0, 1, 2, 3,## back face 4, 7, 6, 4, 5, 7,## front face 1, 3, 5, 3, 7, 5,## top face 2, 0, 4, 2, 4, 6,## bottom face 1, 5, 4, 0, 1, 4,## left face 6, 7, 3, 6, 3, 2]## right face def toGLArray(input): return (GLfloat*len(input))(*input) def toGLushortArray(input): return (GLushort*len(input))(*input) def initPerspectiveMatrix(aspectRatio = 1.0, fov = 45): frustumScale = 1.0 / tan(deg2rad(fov) / 2.0) fzNear = 0.5 fzFar = 300.0 perspectiveMatrix = [frustumScale*aspectRatio, 0.0 , 0.0 , 0.0 , 0.0 , frustumScale, 0.0 , 0.0 , 0.0 , 0.0 , (fzFar+fzNear)/(fzNear-fzFar) , -1.0, 0.0 , 0.0 , (2*fzFar*fzNear)/(fzNear-fzFar), 0.0 ] return perspectiveMatrix class ModelObject(object): vbo = GLuint() vao = GLuint() eao = GLuint() initDone = False verticesPool = [] indexPool = [] def __init__(self, vertices, indexing): super(ModelObject, self).__init__() if not ModelObject.initDone: glGenVertexArrays(1, ModelObject.vao) glGenBuffers(1, ModelObject.vbo) glGenBuffers(1, ModelObject.eao) glBindVertexArray(ModelObject.vao) initDone = True self.numIndices = len(indexing) self.offsetIntoVerticesPool = len(ModelObject.verticesPool) ModelObject.verticesPool.extend(vertices) self.offsetIntoElementArray = len(ModelObject.indexPool) ModelObject.indexPool.extend(indexing) glBindBuffer(GL_ARRAY_BUFFER, ModelObject.vbo) glEnableVertexAttribArray(0) #position glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ModelObject.eao) glBufferData(GL_ARRAY_BUFFER, len(ModelObject.verticesPool)*4, toGLArray(ModelObject.verticesPool), GL_STREAM_DRAW) glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(ModelObject.indexPool)*2, toGLushortArray(ModelObject.indexPool), GL_STREAM_DRAW) def draw(self): glDrawElements(GL_TRIANGLES, self.numIndices, GL_UNSIGNED_SHORT, self.offsetIntoElementArray) class PositionedObject(object): 
def __init__(self, mesh, pos, objOffsetUf): super(PositionedObject, self).__init__() self.mesh = mesh self.pos = pos self.objOffsetUf = objOffsetUf def draw(self): glUniform3f(self.objOffsetUf, self.pos[0], self.pos[1], self.pos[2]) self.mesh.draw() w = 800 h = 600 AR = float(h)/float(w) window = pyglet.window.Window(width=w, height=h, vsync=False) window.set_exclusive_mouse(True) pyglet.clock.set_fps_limit(None) ## input forward = [False] left = [False] back = [False] right = [False] up = [False] down = [False] inputs = {key.Z: forward, key.Q: left, key.S: back, key.D: right, key.UP: forward, key.LEFT: left, key.DOWN: back, key.RIGHT: right, key.PAGEUP: up, key.PAGEDOWN: down} ## camera camX = 0.0 camY = 0.0 camZ = -1.0 def simulate(delta): global camZ, camX, camY scale = 10.0 move = scale*delta if forward[0]: camZ += move if back[0]: camZ += -move if left[0]: camX += move if right[0]: camX += -move if up[0]: camY += move if down[0]: camY += -move pyglet.clock.schedule(simulate) @window.event def on_key_press(symbol, modifiers): global forward, back, left, right, up, down if symbol in inputs.keys(): inputs[symbol][0] = True @window.event def on_key_release(symbol, modifiers): global forward, back, left, right, up, down if symbol in inputs.keys(): inputs[symbol][0] = False ## uniforms for shaders camOffsetUf = GLuint() objOffsetUf = GLuint() perspectiveMatrixUf = GLuint() camRotationUf = GLuint() program = ShaderProgram( VertexShader(''' #version 330 layout(location = 0) in vec4 objCoord; uniform vec3 objOffset; uniform vec3 cameraOffset; uniform mat4 perspMx; void main() { mat4 translateCamera = mat4(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, cameraOffset.x, cameraOffset.y, cameraOffset.z, 1.0f); mat4 translateObject = mat4(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, objOffset.x, objOffset.y, objOffset.z, 1.0f); vec4 modelCoord = objCoord; vec4 positionedModel = translateObject*modelCoord; vec4 cameraPos = translateCamera*positionedModel; gl_Position = perspMx * cameraPos; }'''), FragmentShader(''' #version 330 out vec4 outputColor; const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f); void main() { outputColor = fillColor; }''') ) shapes = [] def init(): global camOffsetUf, objOffsetUf with program: camOffsetUf = glGetUniformLocation(program.id, "cameraOffset") objOffsetUf = glGetUniformLocation(program.id, "objOffset") perspectiveMatrixUf = glGetUniformLocation(program.id, "perspMx") glUniformMatrix4fv(perspectiveMatrixUf, 1, GL_FALSE, toGLArray(initPerspectiveMatrix(AR))) obj = ModelObject(vertexData, elementArray) nb = 20 for i in range(nb): for j in range(nb): for k in range(nb): shapes.append(PositionedObject(obj, (float(i*2), float(j*2), float(k*2)), objOffsetUf)) glEnable(GL_CULL_FACE) glCullFace(GL_BACK) glFrontFace(GL_CW) glEnable(GL_DEPTH_TEST) glDepthMask(GL_TRUE) glDepthFunc(GL_LEQUAL) glDepthRange(0.0, 1.0) glClearDepth(1.0) def update(dt): print pyglet.clock.get_fps() pyglet.clock.schedule_interval(update, 1.0) @window.event def on_draw(): with program: pyglet.clock.tick() glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT) glUniform3f(camOffsetUf, camX, camY, camZ) for shape in shapes: shape.draw() init() pyglet.app.run()

    Read the article

  • How do I add "Press any key to boot from usb" when installing Windows from a flash drive? (Grub4dos question / how to remove a bootloader)

    - by Vincent
    Hi there! I've been struggling with this problem for a while now and finially decided to ask for help. Let me first explain what the main purpose of the app is: to provide the a very easy to use way of backing up files, after which I format the drive and start Windows 7 setup. I do this by booting WinPE, which runs a script to detect Windows installations and then opens a file browser. After the file browser is closed, the script continues and formats the drive that contains the Windows installation, and starts an unattended Windows 7 install. Now here is the problem: When you start Windows setup or WinPE from a dvd, you get a nice option to "Press any key to boot from DVD". This is to prevent the computer from booting the DVD when the first phase of the installation is complete and the computer reboots. However, when booting from a flash drive, Windows does not provide this option: it simply boots the flash drive every reboot. To replicate the "press any key" function, I installed Grub4Dos, which works great. It provides a small menu, the first standard item being "Continue installation", the second being "start installation". After quite a lot of tweaking, I got everything working: Start installation starts WinPE, which in turn starts the Windows installation. At first reboot, the Grub4Dos menu comes up, counts 5 seconds and boots the second stage of the installation. Here, I am greeted with the error: "Windows setup could not configure windows to run on this computer's hardware." When I boot into WinPE the normal way (put the bootmgr on the stick root) and change my bios to boot from the primary hdd after first reboot, I don't get this error. I've been looking around, and the only thing I could find was that the BIOS automatically names the boot device hd0, and that Windows can only be run / installed to hd 0. I'm not sure if this is the problem. I read about remapping to solve this problem, but to do that you have to know the phisical location of the hard drive and partition, like hd(0,1). I want this flash drive to work on any PC, regardless of where the OS is installed, so that's not really a possibility. A possible fix I thought of is removing the bootloader from the flash drive when I'm in WinPE. That way, when the pc reboots the BIOS will not see the flash drive as a boot drive and instead boot the primary hdd. I have yet to find a way to do this. Thank you for reading my question, and if you have any suggestion, please do.

    Read the article
