Search Results

Search found 15985 results on 640 pages for 'debug print'.

Page 470 of 640

  • Programmatically updating one update panel's elements from another update panel

    - by Jalpesh P. Vadgama
    While interviewing ASP.NET candidates I often ask this question, but most people are unable to answer it, so I decided to write a blog post about it. Here is the scenario: there are two update panels in my HTML code; the first update panel contains a textbox (txtHelloWorld) and the second update panel contains a button called btnHelloWorld. I want to update the textbox text in the button's click event without a full postback, but in the normal scenario the textbox text will not update because the two controls are in different update panels. Here is the markup: <form id="form1" runat="server"> <asp:ScriptManager ID="myScriptManager" runat="server" EnableCdn="true"></asp:ScriptManager> <asp:UpdatePanel ID="firstUpdatePanel" runat="server" UpdateMode="Conditional"> <ContentTemplate> <asp:TextBox ID="txtHelloWorld" runat="server"></asp:TextBox> </ContentTemplate> </asp:UpdatePanel> <asp:UpdatePanel ID="secondUpdatePanel" runat="server" UpdateMode="Conditional"> <ContentTemplate> <asp:Button ID="btnHelloWorld" runat="server" Text="Print Hello World" onclick="btnHelloWorld_Click" /> </ContentTemplate> </asp:UpdatePanel> </form> Here comes the magic! Many people don't know that the UpdatePanel exposes an Update method with which we can programmatically refresh an update panel's contents from code. Below is the code for that. protected void btnHelloWorld_Click(object sender, System.EventArgs e) { txtHelloWorld.Text = "Hello World!!!"; firstUpdatePanel.Update(); } That's it - here I have updated the firstUpdatePanel from code. Hope you liked it. Stay tuned for more. Happy Programming. Technorati Tags: UpdatePanel,ASP.NET

    Read the article

  • What's the best way to create a static utility class in python? Is using metaclasses code smell?

    - by rsimp
    Ok so I need to create a bunch of utility classes in python. Normally I would just use a simple module for this but I need to be able to inherit in order to share common code between them. The common code needs to reference the state of the module using it so simple imports wouldn't work well. I don't like singletons, and classes that use the classmethod decorator do not have proper support for python properties. One pattern I see used a lot is creating an internal python class prefixed with an underscore and creating a single instance which is then explicitly imported or set as the module itself. This is also used by fabric to create a common environment object (fabric.api.env). I've realized another way to accomplish this would be with metaclasses. For example: #util.py class MetaFooBase(type): @property def file_path(cls): raise NotImplementedError def inherited_method(cls): print cls.file_path #foo.py from util import * import env class MetaFoo(MetaFooBase): @property def file_path(cls): return env.base_path + "relative/path" def another_class_method(cls): pass class Foo(object): __metaclass__ = MetaFoo #client.py from foo import Foo file_path = Foo.file_path I like this approach better than the first pattern for a few reasons: First, instantiating Foo would be meaningless as it has no attributes or methods, which ensures this class acts like a true single-interface utility, unlike the first pattern which relies on the underscore convention to dissuade client code from creating more instances of the internal class. Second, sub-classing MetaFoo in a different module wouldn't be as awkward because I wouldn't be importing a class with an underscore, which is inherently going against its private naming convention. Third, this seems to be the closest approximation to a static class that exists in python, as all the meta code applies only to the class and not to its instances. This is shown by the common convention of using cls instead of self in the class methods. As well, the base class inherits from type instead of object, which would prevent users from trying to use it as a base for other non-static classes. Its implementation as a static class is also apparent when using it by the naming convention Foo, as opposed to foo, which denotes that a static class method is being used. As much as I think this is a good fit, I feel that others might consider it unpythonic because it's not a sanctioned use for metaclasses, which should be avoided 99% of the time. I also find most python devs tend to shy away from metaclasses, which might affect code reuse/maintainability. Is this code considered a code smell in the python community? I ask because I'm creating a pypi package, and would like to do everything I can to increase adoption.
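
    For readers who want to try the pattern end to end, here is a minimal self-contained sketch of the metaclass approach described above (Python 2 syntax to match the excerpt; the literal path stands in for the env module, which isn't shown):

        # util.py equivalent: shared behaviour lives on a base metaclass
        class MetaFooBase(type):
            @property
            def file_path(cls):
                raise NotImplementedError

            def inherited_method(cls):
                print cls.file_path

        # foo.py equivalent: the concrete metaclass supplies the property
        class MetaFoo(MetaFooBase):
            @property
            def file_path(cls):
                return "/base/path/" + "relative/path"  # stand-in for env.base_path

        class Foo(object):
            __metaclass__ = MetaFoo

        # client.py equivalent: everything is used directly on the class
        print Foo.file_path      # the property is evaluated on the class itself
        Foo.inherited_method()   # shared code from MetaFooBase sees Foo's file_path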

    Read the article

  • Smooth animation when using fixed time step

    - by sythical
    I'm trying to implement the game loop where the physics is independent from rendering but my animation isn't as smooth as I would like it to be and it seems to periodically jump. Here is my code: // alpha is used for interpolation double alpha = 0, counter_old_time = 0; double accumulator = 0, delta_time = 0, current_time = 0, previous_time = 0; unsigned frame_counter = 0, current_fps = 0; const unsigned physics_rate = 40, max_step_count = 5; const double step_duration = 1.0 / 40.0, accumulator_max = step_duration * 5; // information about the circ;e (position and velocity) int old_pos_x = 100, new_pos_x = 100, render_pos_x = 100, velocity_x = 60; previous_time = al_get_time(); while(true) { current_time = al_get_time(); delta_time = current_time - previous_time; previous_time = current_time; accumulator += delta_time; if(accumulator > accumulator_max) { accumulator = accumulator_max; } while(accumulator >= step_duration) { if(new_pos_x > 1330) velocity_x = -15; else if(new_pos_x < 70) velocity_x = 15; old_pos_x = new_pos_x; new_pos_x += velocity_x; accumulator -= step_duration; } alpha = accumulator / static_cast<double>(step_duration); render_pos_x = old_pos_x + (new_pos_x - old_pos_x) * alpha; al_clear_to_color(al_map_rgb(20, 20, 40)); // clears the screen al_draw_textf(font, al_map_rgb(255, 255, 255), 20, 20, 0, "current_fps: %i", current_fps); // print fps al_draw_filled_circle(render_pos_x, 400, 15, al_map_rgb(255, 255, 255)); // draw circle // I've added this to test how the program will behave when rendering takes // considerably longer than updating the game. al_rest(0.008); al_flip_display(); // swaps the buffers frame_counter++; if(al_get_time() - counter_old_time >= 1) { current_fps = frame_counter; frame_counter = 0; counter_old_time = al_get_time(); } } I have added a pause during the rendering part because I wanted to see how the code would behave when a lot of rendering is involved. Removing it makes the animation smooth but then I'll have to make sure that I don't let the frame rate drop too much and that doesn't seem like a good solution. I've been trying to fix this for a week and have had no luck so I'd be very grateful if someone can read through my code. Thank you! Edit: I added the following code to work out the actual velocity (pixels per second) of the ball each time the ball is rendered and surprisingly it's not constant so I'm guessing that's the issue. I'm not sure why it's not constant. alpha = accumulator / static_cast<double>(step_duration); render_pos_x = old_pos_x + (new_pos_x - old_pos_x) * alpha; cout << (render_pos_x - old_render_pos) / delta_time << endl; old_render_pos = render_pos_x;

    Read the article

  • NRF Big Show 2011 -- Part 2

    - by David Dorf
    One of the things I love about attending NRF is visiting the smaller booths to see what new innovative ideas have sprung up. After all, by watching emerging technologies we can get a sense of how the retail experience might change. After NRF I'm hoping to write a post on what I found, if anything, so be sure to check back. At the Oracle Retail booth we'll be demonstrating some of the aspects of the changing retail experience. These demos use a mix of GA and experimental components. Here are some highlights: 1. Checkin We wrote a consumer iPhone app we call Store Gateway that lets consumers access information from the store. They'll start by doing a checkin when they arrive that will alert the store manager via another iPhone app we wrote called Mobile Manager. Additionally, we display a welcome messaging using Starmount's digital sign. 2. Receive Offers There are three interaction points where a store can easily make an offer to a consumer: checkin, product scans, and checkout. For this demo we're calling our Universal Offer Engine at checkin to determine the best offer for this particular consumer. This offer is then displayed on the consumer's phone as well as on the digital sign. 3. Scan Products To thwart consumers from scanning product barcodes, we used Store Inventory Management to print QRCodes on shelf label then provided access to a scanner in the Store Gateway iphone app. When the consumer scans the shelf label they are shown product information provided by the retailer. 4. Checkout While we don't have a NFC-enabled mobile phone, we have a NFC chip that can attach to a phone. We're using this to checkout using a reader provided by ViVOTech. Tap the phone on the reader, and the POS accesses the customer#, coupons, and payment information. This really speeds the checkout process. 5. Digital Receipt After the transaction is complete, a digital copy of the receipt is sent to Intuit's QuickReceipts where consumers to store all their digital receipts. There's even an iPhone app that provides easy access to the receipts. This covers about half of what what we'll be showing, so be sure to stop by. I'll also be talking about how mobile is impacting the retail experience at the Wednesday morning session NRF Mobile Retail Initiative: a Blueprint for Action. See you at the Big Show!

    Read the article

  • Desktop Fun: Happy New Year Wallpaper Collection [Bonus Edition]

    - by Asian Angel
    As this year draws to a close, it is a time to reflect back on what we have done this year and to look forward to the new one. To help commemorate the event we have put together a bonus size edition of Happy New Year wallpapers for your desktops. Extra Note: We made a special effort to find wallpapers for this collection without the year “printed” on them, thus allowing for reuse as desired and/or needed beyond the 2010 – 2011 holiday. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more New Year’s desktop goodness be sure to check out our Happy New Year icon & font packs collection (link at bottom)! Note: This wallpaper will need to be placed on a larger white background in order to increase the height. Note: This wallpaper will need to be placed on a larger background in order to increase the width and height. Note: This wallpaper comes in multiple sizes and will need to be downloaded as a zip file. Note: This wallpaper comes in multiple sizes and will need to be downloaded as a zip file. Note: The download size for the original version of this wallpaper is 15 MB. Note: The download size for the original version of this wallpaper is 15 MB. More Happy New Year Fun Desktop Fun: Happy New Year Icon and Font Packs For more wallpapers be certain to see our great collections in the Desktop Fun section.

    Read the article

  • The value of money

    - by ambreesh
    A dictionary definition of money is "any circulating medium of exchange, including coins, paper money, and demand deposits". If you ask an economist for a definition of money, you will be introduced to terms like M1, M2, M3, all of which denote tangible assets - currency, and anything that is liquid enough to be used as currency; checks, stamps and now mobile minutes being examples. The macroeconomic theory of money is fascinating - the effect of money supply on exchange rates and interest rates, the concept of the "money multiplier" (if I deposit $10 into a bank, the bank will likely loan $8 of it to someone else, who will then give it to someone else in exchange for goods and services, who will then likely deposit it again, which will result in the bank loaning it again and so on - making that $10 of money supply worth a lot more ($10+$8+$x+...)). But all this depends on money supply - in other words, money that is printed by the mint. The Treasury Department spends a lot of time figuring out how much money to print; there is a lot being written about QE2 nowadays, which is intended to increase the money supply. Money is used to purchase goods and services, and yes it is saved too, but that is so one can purchase goods and services later. Completely unrelated, there is a sea change occurring in the web world, dominated by, I believe, Facebook. With 500M active users and growing, FB has the ability to introduce a "money supply" which is completely unrelated to today's "money". Using today's money, a FB user can buy a certain number of FB$s, and then use the FB$s within FB to purchase goods and services - with the money multiplier kicking in. I remember talking with a colleague about this a few years ago: the true way to monetize the web is to introduce an alternative system to the existing one, and FB has the ability to do just that. There is enough momentum, enough mass, for FB to start to monetize its user base. And completely screw up the economists at the Treasury, not to mention disintermediating the banks completely. The only other ubiquitous asset is mobile minutes. People exchanging mobile minutes for tangible goods and services happens today; the big difference, however, is the demographic. While Safaricom offers this ability in Kenya today, FB has the 15-40 year-old middle-class user as its user. And the next generation is growing up with FB as a standard channel for communicating with their peers. Virtual flowers when going in for the kill? If your target is an avid FB user, why not? It certainly is a lot more green - no pun intended!
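
    As an aside, the "$10+$8+..." chain above is just a geometric series; a tiny sketch of the arithmetic (assuming, as in the example, that 80% of every deposit is re-lent):

        # Money multiplier: each deposit is re-lent at 80%, so the series converges
        deposit, relend_ratio = 10.0, 0.8
        total_money_supply = deposit / (1 - relend_ratio)   # 10 + 8 + 6.4 + ... = 50
        print(total_money_supply)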

    Read the article

  • ASP.NET Membership Password Hash -- .NET 3.5 to .NET 4 Upgrade Surprise!

    - by David Hoerster
    I'm in the process of evaluating how my team will upgrade our product from .NET 3.5 SP1 to .NET 4. I expected the upgrade to be pretty smooth with very few, if any, upgrade issues. To my delight, the upgrade wizard said that everything upgraded without a problem. I thought I was home free, until I decided to build and run the application. A big problem was staring me in the face -- I couldn't log on. Our product is using a custom ASP.NET Membership Provider, but essentially it's a modified SqlMembershipProvider with some additional properties. And my login was failing during the OnAuthenticate event handler of my ASP.NET Login control, right where it was calling my provider's ValidateUser method. After a little digging, it turns out that the password hash that the membership provider was using to compare against the stored password hash in the membership database tables was different. I compared the password hash from the .NET 4 code line, and it was a different generated hash than my .NET 3.5 code line. (Tip -- when upgrading, always keep a valid debug copy of your app handy in case you have to step through a lot of code.) So it was a strange situation, but at least I knew what the problem was. Now the question was, "Why was it happening?" Turns out that a breaking change in .NET 4 is that the default hash algorithm changed to SHA256. Hey, that's great -- stronger hashing algorithm. But what do I do with all the hashed passwords in my database that were created using SHA1? Well, you can make two quick changes to your app's web.config and everything will be OK. Basically, you need to override the default HashAlgorithmType property of your membership provider. Here are the two places to do that: 1. At the beginning of your <system.web> element, add the following <machineKey> element: <system.web> <machineKey validation="SHA1" /> ... </system.web> 2. On your <membership> element under <system.web>, add the following hashAlgorithmType attribute: <system.web> <membership defaultProvider="myMembership" hashAlgorithmType="SHA1"> ... </system.web> After that, you should be good to go! Hope this helps.
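
    Putting the two snippets together, a consolidated web.config sketch might look like the following (the provider name is illustrative; the machineKey validation attribute and the membership hashAlgorithmType attribute are the parts that matter):

        <!-- Sketch of the combined web.config change; provider registration details omitted -->
        <system.web>
          <machineKey validation="SHA1" />
          <membership defaultProvider="myMembership" hashAlgorithmType="SHA1">
            <providers>
              <!-- custom membership provider is registered here -->
            </providers>
          </membership>
        </system.web>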

    Read the article

  • Why CoffeeScript is tough to maintain

    - by Renso
    I recently started trying out CoffeeScript, only to find out that it caused more headaches. The abstraction level of jQuery was perfect: it did not dictate to coders how to design their code; it just works. However, I recently posted a request to the CoffeeScript team to consider introducing curly braces to help with more complex code to control the flow of logic. For example, an if-then-else with many nested levels can be nearly impossible to debug without tracing through it when using CoffeeScript. Also, with IDEs like Visual Studio, regular JavaScript IntelliSense and auto-formatting make it easy to appropriately indent nested levels without any work on the part of the developer, and reading it is not that hard, especially with some extensions that show vertical lines in the code editor to help see what is nested within what part of the code. However, with CoffeeScript that is not the case. The samples given on the CoffeeScript web site are of course just simple examples to explain the features, and one gets excited pretty quickly over the powerful shortcuts. I tried to convert a piece of JavaScript over to CoffeeScript and gave up, since you first need to remove ALL non-CoffeeScript coding constructs for it to even compile (js2coffee can help with that). Keeping track of nested levels, however, became something that was simply not manageable using CoffeeScript. Furthermore, any coding language that controls the flow of logic by indentation is extremely dangerous for obvious reasons. I liked CoffeeScript a lot, but the fact that the logical flow of the code is controlled by how much you indent code, spaces or tabs, is not reliable, as the programmer has no easy way of knowing which parts of the code will get hit when the code spans a page. When I suggested introducing curly braces to the CoffeeScript team, one contributor advised me that my code needed to be redesigned! Needless to say, that is absurd. When I included a piece of the code he asked me if it was legacy code. It's like saying to a Java programmer, "Sorry, you cannot use Java because we don't agree with how you write your code." jashkenas from the CoffeeScript blog gave some great suggestions and made the point that introducing curly braces would be very problematic for them as they use them to denote objects. Makes sense, but I would still love to see some way to replace controlling code flow with spaces and indentation with something more concrete and human-readable.
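
    A small, made-up illustration of the indentation issue described above (the function names are placeholders, not from the post): in CoffeeScript, out-denting a line by one level silently changes which branch owns it, whereas the equivalent JavaScript would force you to move it past a closing brace.

        # CoffeeScript: the block a statement belongs to is decided purely by indentation
        if user?
          if user.isAdmin
            grantAccess user
          else
            logAttempt user
            notifyAdmin user    # shift this left one level and it runs for admins too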

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kinda reminds me of the term "ECR" or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but still falls short. It's kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, which I first heard from my friends over at RSR but was originally coined at IDC. First, retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store. Thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience. Each channel has its own cart, its own pricing, and often its own CRM. This was an outgrowth of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today. Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel-transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.) and often these mechanisms can be combined in various ways. I'm looking forward to the day in which I can use my phone to scan QR codes in a catalog to create a shopping cart of items, then do some further research on the retailer's Web site and be told about related items that might interest me, easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In this scenario, I, the consumer, am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • Referencing a picture in another DLL in Silverlight and Windows Phone 7

    - by Laurent Bugnion
    This one has burned me a few times, so here is how it works for future reference: Usually, when I add an Image control into a Silverlight application, and the picture it shows is local (as opposed to loaded from the web), I set the picture's Build Action to Content, and the Copy to Output Directory to Copy if Newer. What the compiler does then is to copy the picture to the bin\Debug folder, and then to pack it into the XAP file. In XAML, the syntax to refer to this local picture is: <Image Source="/Images/mypicture.jpg" Width="100" Height="100" /> And in C#: return new BitmapImage(new Uri( "/Images/mypicture.jpg", UriKind.Relative)); One of the features of Silverlight is to allow referencing content (pictures, resource dictionaries, sound files, movies etc…) located in a DLL directly. This is very handy because just by using the right syntax in the URI, you can do this in XAML directly, for example with: <Image Source="/MyApplication;component/Images/mypicture.jpg" Width="100" Height="100" /> In C#, this becomes: return new BitmapImage(new Uri( "/MyApplication;component/Images/mypicture.jpg", UriKind.Relative)); Side note: This kind of URI is called a pack URI, and they have been around since the early days of WPF. There is a good tutorial about pack URIs on MSDN. Even though it refers to WPF, it also applies to Silverlight. Side note 2: With the Build Action set to Content, you can rename the XAP file to ZIP, extract all the files, change the picture (but keep the same name), rezip the whole thing and rename again to XAP. This is not possible if the picture is embedded in an assembly! So what's the catch? Well, the catch is that this does not work if you set the Build Action to Content. It's actually pretty simple to explain: the pack URI above tells the Silverlight runtime to look within an assembly named MyApplication for a file named mypicture.jpg in the Images folder. If the file is included as Content, however, it is not in the assembly. Silverlight does not find it, and silently returns nothing. The image is not displayed. And the fix? The fix, for class libraries, is to set the Build Action to Resource. With this, the picture gets packed into the DLL itself. Of course, this will increase the size of the DLL, and any change to the picture will require recompiling the class library, which is not ideal. But in the cases where you want to distribute pictures (icons etc) together with a plug-in assembly, well, this is a good way to have everything in the same place. Happy coding, Laurent

    Read the article

  • Two interfaces with identical signatures

    - by corsiKa
    I am attempting to model a card game where cards have two important sets of features: The first is an effect. These are the changes to the game state that happen when you play the card. The interface for effect is as follows: boolean isPlayable(Player p, GameState gs); void play(Player p, GameState gs); And you could consider the card to be playable if and only if you can meet its cost and all its effects are playable. Like so: // in Card class boolean isPlayable(Player p, GameState gs) { if(p.resource < this.cost) return false; for(Effect e : this.effects) { if(!e.isPlayable(p,gs)) return false; } return true; } Okay, so far, pretty simple. The other set of features on the card are abilities. These abilities are changes to the game state that you can activate at-will. When coming up with the interface for these, I realized they needed a method for determining whether they can be activated or not, and a method for implementing the activation. It ends up being boolean isActivatable(Player p, GameState gs); void activate(Player p, GameState gs); And I realize that with the exception of calling it "activate" instead of "play", Ability and Effect have the exact same signature. Is it a bad thing to have multiple interfaces with an identical signature? Should I simply use one, and have two sets of the same interface? As so: Set<Effect> effects; Set<Effect> abilities; If so, what refactoring steps should I take down the road if they become non-identical (as more features are released), particularly if they're divergent (i.e. they both gain something the other shouldn't, as opposed to only one gaining and the other being a complete subset)? I'm particularly concerned that combining them will be non-sustainable as soon as something changes. The fine print: I recognize this question is spawned by game development, but I feel it's the sort of problem that could just as easily creep up in non-game development, particularly when trying to accommodate the business models of multiple clients in one application as happens with just about every project I've ever done with more than one business influence... Also, the snippets used are Java snippets, but this could just as easily apply to a multitude of object oriented languages.

    Read the article

  • Calling functions from different classes

    - by A Ron Hubbard Clevenger
    I'm writing a program and I'm supposed to check and see if a certain object is in the list before I call it. I set up the contains() method, which is supposed to use the equals() method of the Comparable interface I implemented on my Golfer class, but it doesn't seem to call it (I put print statements in to check). I can't seem to figure out what's wrong with the code; the ArrayUnsortedList class I'm using to go through the list even uses the correct toString() method I defined in my Golfer class, but for some reason it won't use the equals() method I implemented. //From "GolfApp.java" public class GolfApp{ ListInterface <Golfer>golfers = new ArraySortedList<Golfer> (20); Golfer golfer; //..*snip*.. if(this.golfers.contains(new Golfer(name,score))) System.out.println("The list already contains this golfer"); else{ this.golfers.add(this.golfer = new Golfer(name,score)); System.out.println("This golfer is already on the list"); } //From "ArrayUnsortedList.java" protected void find(T target){ location = 0; found = false; while (location < numElements){ if (list[location].equals(target)) //Where I think the problem is { found = true; return; } else location++; } } public boolean contains(T element){ find(element); return found; } //From "Golfer.java" public class Golfer implements Comparable<Golfer>{ //..irrelevant code snipped..// public boolean equals(Golfer golfer) { String thisString = score + ":" + name; String otherString = golfer.getScore() + ":" + golfer.getName() ; System.out.println("Golfer.equals() has been called"); return thisString.equalsIgnoreCase(otherString); } public String toString() { return (score + ":" + name); } My main problem seems to be getting the find function of the ArrayUnsortedList to call my equals function, but I'm not exactly sure why it doesn't; like I said, when I print things out it works with the toString() method I implemented perfectly. I'm almost positive the problem has to do with the find() function in the ArraySortedList not calling my equals() method. I tried using some other functions that relied on the find() method and got the same results.
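
    One detail worth noting: Java resolves between equals overloads using the declared parameter type, so a call like list[location].equals(target), where target is only known as T/Object, binds to Object.equals(Object) - a signature the Golfer class above never overrides. A sketch (not from the question) of an equals override that find() would actually dispatch to, reusing the score:name convention from toString():

        // Overrides the inherited Object.equals(Object), so it is reached even when the
        // caller only knows the element as Object/T (as ArrayUnsortedList.find() does).
        @Override
        public boolean equals(Object obj) {
            if (this == obj) return true;
            if (!(obj instanceof Golfer)) return false;
            Golfer other = (Golfer) obj;
            return (score + ":" + name).equalsIgnoreCase(other.getScore() + ":" + other.getName());
        }

        // equals and hashCode should stay consistent with each other
        @Override
        public int hashCode() {
            return (score + ":" + name).toLowerCase().hashCode();
        }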

    Read the article

  • Ubuntu 12.04 Automounting ntfs partition

    - by kuzyt
    Ive looked everywhere to fix this problem but I cant seem to figure out why its doing this. I have the following /etc/fstab entry to mount a ntfs partition using ntfs-3g. UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 The volume label for this partition is "MEDIA02" So I have had no problems with the fstab mounting. The problem however is that it automounts again using MEDIA02 label. I'm not sure automounting is the right term for this as its just an empty directory. Deleting this directory and rebooting is causing it to appear again. So listing /media I see both MEDIA02 & mediahd02 htpc@htpc:~$ cat /etc/fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdf1 during installation UUID=ec027544-b0e7-4145-99a4-905543a9781a / ext4 errors=remount-ro,noatime,discard 0 1 # swap was on /dev/sdf5 during installation UUID=1794409e-723f-41ac-9f31-ae059f377613 none swap sw 0 0 # Added all the lines below this tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0 UUID=0F70-3B06 /media/mediahd01 vfat defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 htpc@htpc:~$ cat /etc/mtab /dev/sdc1 / ext4 rw,noatime,errors=remount-ro,discard 0 0 proc /proc proc rw,noexec,nosuid,nodev 0 0 sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0 none /sys/fs/fuse/connections fusectl rw 0 0 none /sys/kernel/debug debugfs rw 0 0 none /sys/kernel/security securityfs rw 0 0 udev /dev devtmpfs rw,mode=0755 0 0 devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0 tmpfs /tmp tmpfs rw,noatime,mode=1777 0 0 tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0 none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0 none /run/shm tmpfs rw,nosuid,nodev 0 0 /dev/sdc1 /media/usbhd-sdc1 ext4 rw,relatime 0 0 /dev/sdb1 /media/mediahd02 fuseblk rw,noexec,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0 /dev/sda5 /media/mediahd01 vfat rw,noexec,nosuid,nodev,uid=1000,gid=1000,dmask=007,fmask=117 0 0 /dev/sdh1 /media/Windows_7 fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0 Can someone shed some light as to why its doing this ?

    Read the article

  • Default values - are they good or evil?

    - by Andrew
    The question is about default values in general: default function return values, default parameter values, default logic for when something is missing, default logic for handling exceptions, default logic for handling edge conditions, etc. For a long time I considered default values to be a "pure evil" thing, something that "cloaks the catastrophe" and results in bugs that are very hard to find. But recently I started to think about default values as some sort of technical debt ... which is not a straight-up bad thing but something that could provide some "short term financing" to get us through the project (how many of us could afford to buy a house without taking out a mortgage?). When I say "short term", I don't mean "do something quickly first and refactor it out later before it hits production". No - I am talking about relying on hardcoded default values in production software. Granted, it could cause some issues, but what if it is only going to cause a single problem in a whole year? Again, I am talking about the "average" mainstream software here (not software for a nuclear power station) - the average web site or a UI application for accounting software, meaning that people's lives are not at stake, nor millions of dollars. Again, from my experience, business users would rather live with software which "works somehow" than wait for a perfect one. And the use of default values helps a lot if you develop software in a RAD style. But again - the longest debug sessions I have spent were because of bugs introduced by a default value which either stopped being "the default" along the way, or because a small subsystem had recently been upgraded and as a result of this upgrade it no longer handled the default correctly (e.g. empty list vs null, or null string vs empty string). So my question is - are default values good or evil? And if they are technical debt - how do you measure how much you can borrow so you can afford the repayments? Would really appreciate any input. Cheers. EDIT: If I am using default values as a way to cut corners during development - and if the corner-cutting results in bugs and issues - what is the methodology for recovering from these issues?

    Read the article

  • NuGet JustMock

    - by mehfuzh
    As most of us already know, JustMock got a free edition. The free edition is not a stripped-down version in terms of features; I would rather say it is stripped down in the types you can mock. Technically, the free version runs on a proxy, while the full version runs on proxy + profiler. In the full version, it switches to the profiler when you are mocking final methods, sealed classes, or anything else that cannot be done using inheritance. For example, in the full version you can mock non-public methods; in the free version you can still do it, but the method has to be virtual for protected members, or it must be done through the InternalsVisibleTo attribute for internal virtual methods (if you have access to the source and can apply the attribute). Now, you can get a copy of the free edition from the product page. Install it and off you go. But it is also exposed through NuGet. For those of you not familiar with NuGet (that would be odd): NuGet is the centralized package manager from Microsoft that cuts out the workflow of manually including libraries in your project. I think NuGet will in future limit the scope of ".vsi" packages and installers because of its ease (except in some cases). It's similar to Ruby gems. In Ruby you can install virtually any library this way - "gem install <target_library>" - and off you go. It will check the dependencies and install them, or else prompt you with the steps you need to take. Now, sticking to the post, to get started you first need to install the NuGet package manager. Once you have completed that step, pressing "Ctrl + W, Ctrl + Z" will bring up a console like the one below: Once you are here, you just have to type "install-package justmock" Next, it should print the confirmation when the installation is complete: Moving to the Visual Studio Solution Explorer, you will now see: Finally, NuGet is still in its early days and the steps shown here may not remain the same in coming releases, but feel free to enjoy what is out there right now. Regarding the JustMock free edition, there is a nice post by Phil Japikse at Introducing JustMock Free Edition. I think it's worth checking out if you haven't already. Have fun and happy holidays!

    Read the article

  • Intel Centrino Wireless N 1000 doesn't work on a Lenovo Z560

    - by Timetraveler
    I upgraded my Ubuntu 11.04 to 11.10 and my Wifi has stopped working. I have a Lenovo Z560 that has Intel centrino wireless-N 1000. I have searched various threads having similar problems for a solution and have no success. The wlan0 is not even showing up in rfkill. Please help me find a solution. I am giving below the output of various debug commands. Thanks in advance. DISTRIB_ID=Ubuntu DISTRIB_RELEASE=11.10 DISTRIB_CODENAME=oneiric DISTRIB_DESCRIPTION="Ubuntu 11.10" ----##uname -a Linux gurucharapathy-laptop 3.0.0-17-generic-pae #30-Ubuntu SMP Thu Mar 8 17:53:35 UTC 2012 i686 i686 i386 GNU/Linux ----##lspci -nnk | grep -iA2 net 05:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1000 [8086:0084] Subsystem: Intel Corporation Centrino Wireless-N 1000 BGN [8086:1315] Kernel modules: iwlagn 06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02) Subsystem: Lenovo Device [17aa:392e] Kernel driver in use: r8169 ----##iwconfig lo no wireless extensions. eth0 no wireless extensions. ----##iwlist scan lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. ----##rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no 2: ideapad_bluetooth: Bluetooth Soft blocked: no Hard blocked: no ----##lsmod Module Size Used by rfcomm 38408 8 bnep 17923 2 parport_pc 32114 0 ppdev 12849 0 binfmt_misc 17292 1 snd_hda_codec_hdmi 31426 1 snd_hda_codec_conexant 52460 1 uvcvideo 67271 0 videodev 85626 1 uvcvideo snd_hda_intel 28358 2 snd_hda_codec 91859 3 snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec joydev 17393 0 snd_pcm 80435 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 i915 509554 9 drm_kms_helper 32889 1 i915 snd_rawmidi 25241 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event snd_timer 28932 2 snd_pcm,snd_seq drm 196290 5 i915,drm_kms_helper snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq mei 36466 0 mac80211 393421 0 snd 55902 14 snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device ideapad_laptop 13575 0 intel_ips 17753 0 btusb 18160 2 i2c_algo_bit 13199 1 i915 soundcore 12600 1 snd bluetooth 148839 23 rfcomm,bnep,btusb cfg80211 172427 1 mac80211 psmouse 63474 0 serio_raw 12990 0 snd_page_alloc 14108 2 snd_hda_intel,snd_pcm sparse_keymap 13658 1 ideapad_laptop wmi 18744 0 video 18908 1 i915 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp ahci 21634 2 libahci 25761 1 ahci r8169 47200 0 ----##nm-tool NetworkManager Tool State: asleep Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: r8169 State: unmanaged Default: no HW Address: 88:AE:1D:DE:5F:9C Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on ----##lshw -C network *-network UNCLAIMED description: Network controller product: Centrino Wireless-N 1000 vendor: Intel Corporation physical id: 0 bus info: pci@0000:05:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:d6400000-d6401fff *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. 
physical id: 0 bus info: pci@0000:06:00.0 logical name: eth0 version: 02 serial: 88:ae:1d:de:5f:9c size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=N/A ip=192.168.0.100 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:2000(size=256) memory:d2410000-d2410fff memory:d2400000-d240ffff memory:d2420000-d243ffff

    Read the article

  • Upgrade from ASP.NET MVC 1 to MVC 2 - how to, and issues with JsonRequestBehavior

    - by Renso
    Goal Upgrade your MVC 1 app to MVC 2. Issues You may get errors about your Json data being returned via a GET request violating security principles - we also address this here. This post is not intended to delve into why the Json GET request is or may be an issue, just how to resolve it as part of upgrading from MVC 1 to 2. Solution First remove all references from your projects to the MVC 1 DLL and replace them with the MVC 2 DLL. Now update the web.config file in your web app root folder by simply changing references to assembly="System.Web.Mvc" from Version=1.0.0.0 to Version=2.0.0.0; there are a couple of references in your config file, and here are probably most of the ones you may have: <compilation debug="true" defaultLanguage="c#"> <assemblies> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> <pages masterPageFile="~/Views/Masters/CRMTemplate.master" pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validateRequest="False"> <controls> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" /> Secondly, if you return Json objects from an ajax call via the GET method, you have several options to fix this depending on your situation: 1. The simplest (in my case I did this for an internal web app): you may simply do: return Json(myObject, JsonRequestBehavior.AllowGet); 2. In MVC, if you have a controller base you could wrap the Json method with: public new JsonResult Json(object data) { return Json(data, "application/json", JsonRequestBehavior.AllowGet); } 3. The most work would be to decorate your Actions with: [AcceptVerbs(HttpVerbs.Get)] 4. Another option that is also a lot of work, as it needs to be done on every ajax call returning Json, is: msg = $.ajax({ url: $('#ajaxGetSampleUrl').val(), dataType: 'json', type: 'POST', async: false, data: { name: theClass }, success: function(data, result) { if (!result) alert('Failure to retrieve the Sample Data.'); } }).responseText; This should cover all the issues you may run into when upgrading. Let me know if you run into any other ones.

    Read the article

  • Apache - create multiple aliases

    - by mc3mcintyre
    I'm trying to setup two websites on my Apache server. One is www.domain.com and the other is test.domain.com. Currently, my 000-default.conf file reads as follows: <VirtualHost www:80> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. #ServerName www.domain.com #ServerAlias www ServerAdmin [email protected] DocumentRoot /var/www/domain.com/ # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn ErrorLog ${APACHE_LOG_DIR}/domain.error.log CustomLog ${APACHE_LOG_DIR}/domain.access.log combined UseCanonicalName on allow from all Options +Indexes # For most configuration files from conf-available/, which are # enabled or disabled at a global level, it is possible to # include a line for only one particular virtual host. For example the # following line enables the CGI configuration for this host only # after it has been globally disabled with "a2disconf". #Include conf-available/serve-cgi-bin.conf </VirtualHost> <VirtualHost test:80> DocumentRoot "/var/www/domain.com/test/" ServerName test.domain.com ServerAdmin [email protected] ErrorLog ${APACHE_LOG_DIR}/test.domain.error.log CustomLog ${APACHE_LOG_DIR}/test.domain.access.log combined UseCanonicalName on allow from all Options +Indexes </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet As is, when I use a browser to go to the www location, it show me a directory listing. However, if I remove the www:80 on Line 1 and replace it with *:80, it correctly displays the webpage. I don't understand why. Can anyone help me configure this 000-default.conf file so that www goes to "/var/www/domain.com" and that test goes to "/var/www/domain.com/test"? Thank you.
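
    Building on the asker's observation that *:80 works, here is a minimal sketch of the conventional name-based virtual-host layout (assuming Apache 2.4, as the conf-available reference suggests, and that both hostnames resolve to this server; paths follow the question):

        # /etc/apache2/sites-available/000-default.conf - illustrative sketch only
        <VirtualHost *:80>
            ServerName www.domain.com
            DocumentRoot /var/www/domain.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerName test.domain.com
            DocumentRoot /var/www/domain.com/test
        </VirtualHost>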

    Read the article

  • The most dangerous SQL Script in the world!

    - by DrJohn
    In my last blog entry, I outlined how to automate SQL Server database builds from concatenated SQL Scripts. However, I did not mention how I ensure the database is clean before I rebuild it. Clearly a simple DROP/CREATE DATABASE command would suffice; but you may not have permission to execute such commands, especially in a corporate environment controlled by a centralised DBA team. However, you should at least have database owner permissions on the development database so you can actually do your job! Then you can employ my universal "drop all" script which will clear down your database before you run your SQL Scripts to rebuild all the database objects. Why start with a clean database? During the development process, it is all too easy to leave old objects hanging around in the database which can have unforeseen consequences. For example, when you rename a table you may forget to delete the old table and change all the related views to use the new table. Clearly this will mean an end-user querying the views will get the wrong data and your reputation will take a nose dive as a result! Starting with a clean, empty database and then building all your database objects using SQL Scripts using the technique outlined in my previous blog means you know exactly what you have in your database. The database can then be repopulated using SSIS and bingo; you have a data mart "to go". My universal "drop all" SQL Script To ensure you start with a clean database run my universal "drop all" script which you can download from here: 100_drop_all.zip By using the database catalog views, the script finds and drops all of the following database objects: Foreign key relationships Stored procedures Triggers Database triggers Views Tables Functions Partition schemes Partition functions XML Schema Collections Schemas Types Service broker services Service broker queues Service broker contracts Service broker message types SQLCLR assemblies There are two optional sections to the script: drop users and drop roles. You may use these at your peril, particularly as you may well remove your own permissions! Note that the script has a verbose mode which displays the SQL commands it is executing. This can be switched on by setting @debug=1. Running this script against one of the system databases is certainly not recommended! So I advise you to keep a USE database statement at the top of the file. Good luck and be careful!!
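
    To make the catalog-view technique concrete, here is an illustrative fragment (a sketch of the approach, not the downloadable script itself; SQL Server 2008+ syntax) showing the pattern for a single object type, foreign keys; the real script repeats the idea for each object type listed above:

        -- Build and run DROP statements for every foreign key, driven by the catalog views
        DECLARE @sql nvarchar(max) = N'';

        SELECT @sql += N'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.'
                     + QUOTENAME(t.name) + N' DROP CONSTRAINT ' + QUOTENAME(fk.name)
                     + N';' + CHAR(10)
        FROM sys.foreign_keys AS fk
        JOIN sys.tables AS t ON fk.parent_object_id = t.object_id;

        -- PRINT @sql;   -- verbose mode: inspect the generated commands first
        EXEC sys.sp_executesql @sql;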

    Read the article

  • glGetActiveAttrib on Android NDK

    - by user408952
    In my code-base I need to link the vertex declarations from a mesh to the attributes of a shader. To do this I retrieve all the attribute names after linking the shader. I use the following code (with some added debug info since it's not really working): int shaders[] = { m_ps, m_vs }; if(linkProgram(shaders, 2)) { ASSERT(glIsProgram(m_program) == GL_TRUE, "program is invalid"); int attrCount = 0; GL_CHECKED(glGetProgramiv(m_program, GL_ACTIVE_ATTRIBUTES, &attrCount)); int maxAttrLength = 0; GL_CHECKED(glGetProgramiv(m_program, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxAttrLength)); LOG_INFO("shader", "got %d attributes for '%s' (%d) (maxlen: %d)", attrCount, name, m_program, maxAttrLength); m_attrs.reserve(attrCount); GLsizei attrLength = -1; GLint attrSize = -1; GLenum attrType = 0; char tmp[256]; for(int i = 0; i < attrCount; i++) { tmp[0] = 0; GL_CHECKED(glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)); LOG_INFO("shader", "%d: %d %d '%s'", i, attrLength, attrSize, tmp); m_attrs.append(String(tmp, attrLength)); } } GL_CHECKED is a macro that calls the function and calls glGetError() to see if something went wrong. This code works perfectly on Windows 7 using ANGLE and gives this this output: info:shader: got 2 attributes for 'static/simplecolor.glsl' (3) (maxlen: 11) info:shader: 0: 7 1 'a_Color' info:shader: 1: 10 1 'a_Position' But on my Nexus 7 (1st gen) I get the following (the errors are the output from the GL_CHECKED macro): I/testgame:shader(30865): got 2 attributes for 'static/simplecolor.glsl' (3) (maxlen: 11) E/testgame:gl(30865): 'glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)' failed: INVALID_VALUE [jni/src/../../../../src/Game/Asset/ShaderAsset.cpp:50] I/testgame:shader(30865): 0: -1 -1 '' E/testgame:gl(30865): 'glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)' failed: INVALID_VALUE [jni/src/../../../../src/Game/Asset/ShaderAsset.cpp:50] I/testgame:shader(30865): 1: -1 -1 '' I.e. the call to glGetActiveAttrib gives me an INVALID_VALUE. The opengl docs says this about the possible errors: GL_INVALID_VALUE is generated if program is not a value generated by OpenGL. This is not the case, I added an ASSERT to make sure glIsProgram(m_program) == GL_TRUE, and it doesn't trigger. GL_INVALID_OPERATION is generated if program is not a program object. Different error. GL_INVALID_VALUE is generated if index is greater than or equal to the number of active attribute variables in program. i is 0 and 1, and the number of active attribute variables are 2, so this isn't the case. GL_INVALID_VALUE is generated if bufSize is less than 0. Well, it's not zero, it's 256. Does anyone have an idea what's causing this? Am I just lucky that it works in ANGLE, or is the nvidia tegra driver wrong?

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get. glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); Which produces a reasonable display ( camera at 0,5,0 looking down y axis ) So rather than do the calculation manually, I should be able to use the opengl model transormation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result ( of course! )

    Read the article

  • OpenGLES GLSL Shader attributes always bound to 0

    - by codemonkey
    So I have a very simple vertex shader as follows #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } Which I load, as well as the fragment shader: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } Which I then load, compile, and link to my shader program. I check for link status using glGetProgramiv(shaderProgram, GL_LINK_STATUS, &shaderSuccess); which returns GL_TRUE so I think its ok. However, when I query the active attributes and uniforms using #ifdef DEBUG int totalAttributes = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &totalAttributes); for(int i=0; i<totalAttributes; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveAttrib(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetAttribLocation(shaderProgram, name); fprintf(stderr, "Attribute %s is bound at %d\n", name, location); } int totalUniforms = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_UNIFORMS, &totalUniforms); for(int i=0; i<totalUniforms; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveUniform(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetUniformLocation(shaderProgram, name); fprintf(stderr, "Uniform %s is bound at %d\n", name, location); } #endif I get: Attribute inColor is bound at 0 Attribute position is bound at 1 Uniform mvp is bound at 0 Which leads to failure when trying to use the shader to render the objects. I have tried switching the order of declaration of position & inColor, but still, only position is bound with the other two giving 0 Can someone please explain why this is happening? Thanks

    Read the article

  • Using INotifyPropertyChanged in background threads

    - by digitaldias
    Following up on a previous blog post where I exemplify databinding to objects, a reader was having some trouble with getting the UI to update. Here's the rough UI: The idea is, when pressing Start, a background worker process starts ticking at the specified interval, then proceeds to increment the databound Elapsed value. The problem is that event propagation is limited to the current thread, meaning that if you fire an event in one thread, other threads of the same application will not catch it. The Code behind So, somewhere in my ViewModel, I have a corresponding method Start that initiates a background worker, for example: public void Start( ) { BackgroundWorker backgroundWorker = new BackgroundWorker( ); backgroundWorker.DoWork += IncrementTimerValue; backgroundWorker.RunWorkerAsync( ); } protected void IncrementTimerValue( object sender, DoWorkEventArgs e ) { do { if( this.ElapsedMs == 100 ) this.ElapsedMs = 0; else this.ElapsedMs++; }while( true ); } Assuming that there is a property: public int ElapsedMs { get { return _elapsedMs; } set { if( _elapsedMs == value ) return; _elapsedMs = value; NotifyThatPropertyChanged( "ElapsedMs" ); } } The above code will not work. If you step into this code in debug, you will find that INotifyPropertyChanged is called, but it does so in a different thread, and thus the UI never catches it, and does not update. One solution Knowing that the background thread updates the ElapsedMs member gives me a chance to activate the BackgroundWorker class's progress reporting mechanism to simply alert the main thread that something has happened, and that it is probably a good idea to refresh the ElapsedMs binding. public void Start( ) { BackgroundWorker backgroundWorker = new BackgroundWorker( ); backgroundWorker.DoWork += IncrementTimerValue; // Listen for progress report events backgroundWorker.WorkerReportsProgress = true; // Tell the UI that ElapsedMs needs to update backgroundWorker.ProgressChanged += ( sender, e ) => { NotifyThatPropertyChanged( "ElapsedMs" ); }; backgroundWorker.RunWorkerAsync( ); } protected void IncrementTimerValue( object sender, DoWorkEventArgs e ) { do { if( this.ElapsedMs == 100 ) this.ElapsedMs = 0; else this.ElapsedMs++; // report any progress ( sender as BackgroundWorker ).ReportProgress( 0 ); }while( true ); } What happens above now is that I've used the BackgroundWorker cross-thread mechanism to alert me when it is OK for the UI to update its ElapsedMs field. Because the property itself is being updated in a different thread, I'm removing the NotifyThatPropertyChanged call from its Set method and moving that responsibility to the anonymous method that I created in the Start method. This is one way of solving the issue of having a background thread update your UI. I would be happy to hear of other cross-threading mechanisms for working in an MVP/MVC/MVVM pattern.

    Read the article

  • More on PHP and Oracle 11gR2 Improvements to Client Result Caching

    - by christopher.jones
    Oracle 11.2 brought several improvements to Client Result Caching. CRC is a way for the results of queries to be cached in the database client process for reuse. In an Oracle OpenWorld presentation "Best Practices for Developing Performant Application" my colleague Luxi Chidambaran had a (non-PHP generated) graph for the Niles benchmark that shows a DB CPU reduction of up to 600% and response times up to 22% faster when using CRC. Sometimes CRC is called the "Consistent Client Cache" because Oracle automatically invalidates the cache if table data is changed. This makes it easy to use without needing application logic rewrites. There are a few simple database settings to turn on and tune CRC, so management is also easy. PHP OCI8, as a "client" of the database, can use CRC. The cache is per-process, so plan carefully before caching large data sets. Tables that are candidates for caching are look-up tables where the network transfer cost dominates. CRC is really easy in 11.2 - I'll get to that in a moment. It was also pretty easy in Oracle 11.1 but it needed some tiny application changes. In PHP it was used like: $s = oci_parse($c, "select /*+ result_cache */ * from employees"); oci_execute($s, OCI_NO_AUTO_COMMIT); // Use OCI_DEFAULT in OCI8 <= 1.3 oci_fetch_all($s, $res); I blogged about this in the past. The query had to include a specific hint that you wanted the results cached, and you needed to turn off auto committing during execution, either with the OCI_DEFAULT flag or its new, better-named alias OCI_NO_AUTO_COMMIT. The no-commit flag rule didn't seem reasonable to me because most people wouldn't be specific about the commit state for a query. In Oracle 11.2, DBAs can now nominate tables for caching, either with CREATE TABLE or ALTER TABLE. That means you don't need the query hint anymore. As well, the no-commit flag requirement has been lifted. Your code can now look like: $s = oci_parse($c, "select * from employees"); oci_execute($s); oci_fetch_all($s, $res); Since your code probably already looks like this, your DBA can find the top queries in the database and simply tune the system by turning on CRC in the database and issuing an ALTER TABLE statement for candidate tables. Voila. Another CRC improvement in Oracle 11.2 is that it works with DRCP connection pooling. There is some fine print about what is and isn't cached; check the Oracle manuals for details. If you're using 11.1 or non-DRCP "dedicated servers" then make sure you use oci_pconnect() persistent connections. Also, in PHP don't bind strings in the query, although binding as SQLT_INT is OK.
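
    For reference, the 11.2 table annotation mentioned above looks like this (the table name follows the PHP example; see the Oracle manuals for the full set of options):

        -- DBA nominates a table so its query results are client-cached without hints
        ALTER TABLE employees RESULT_CACHE (MODE FORCE);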

    Read the article

  • Web Development Goes Pre-Visual InterDev

    - by Ken Cox [MVP]
    As a longtime and hardcore ASP.NET webforms developer, I’m finding the new client-side development world a bit of a grind.  I love learning new technologies, but I can’t help feeling we’ve regressed and lost our old RAD advantage as we move heavy lifting to the client. For my latest project, I’m using Telerik’s KendoUI in Visual Studio 2012. To say I feel clumsy writing this much JavaScript is an understatement. It seems like the only safe way to ‘write’ this code is by copying a working snippet from someone else and pasting it into my HTML page.  For me, JavaScript has largely been for small UI tasks like client-side validation and a bit of AJAX – and often emitted by a server-side control. I find myself today lost in nests of curly braces that Ctrl+K, Ctrl+D doesn’t seem to understand that well either. IntelliSense, my old syntax saviour, doesn’t seem to have kept up with this cobweb of code either. Code completion? Not seeing it. As I fumbled about this evening, I thought about how web development rocketed forward when Microsoft introduced Visual InterDev. Its Design-Time Controls (DTCs) changed the way we created sites. All the iterations of Visual Studio have enhanced that server-side experience where you let a tool write the bulk of the code and manually finesse it from there. What happened? Why am I typing  properties and values (especially default values!) into VS 2012 to get a client-side grid on a page? Where are the drag and drop objects that traditionally provided 70 percent of the mark-up and configuration?  Did we forget how to write Property Pages where you enter a value and the correct syntax appears magically in the source code? To me, the tooling was looking the other way as the scene shifted from server-side code to nimble client-side script. It’ll have to catch up. Although JavaScript is the lingua franca of web browsers, the language is unwieldy, tough to maintain, and messy to debug. If a .NET JIT compiler can turn our VB, F#, and C# source code into an Intermediate Language that executes on a computer, I don’t see why there can’t be a client-side compiler that turns a .NET language into JavaScript that browsers can consume.

    Read the article
