Search Results


  • Demonstrate bad code to client?

    - by jtiger
    I have a new client that has asked me to do a redesign of their website, an ASP.NET Webforms application that was developed by another consultant. It seemed straightforward (it never is), but I took a look at the code to make sure I knew what I was in for. This application was not written well. At all. It is extremely vulnerable to SQL injection attacks, business logic is spread throughout the entire application, there is a lot of duplication, and there is dead-end code that does nothing. On top of that, it keeps throwing exceptions that are being smothered, so it all appears to be running smoothly. My job is simply to update the HTML and CSS, but much of the HTML is being generated in business logic and it would be a nightmare for me to sort everything out. My estimate for the redesign was longer than the client was aiming for, and they are asking why it will take so long. How can I explain to my client just how bad this code is? In their mind the application is running great and the redesign should be a quick one-off. It's my word against the previous consultant's, so how can I give simple, concrete examples that a non-technical client would understand?
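
    One way to make the SQL injection point concrete for a non-technical client is a side-by-side snippet. The following is a hypothetical illustration of the class of bug being described, not code from the actual application:

        // Hypothetical C# example -- the class, table and column names are invented.
        using System.Data.SqlClient;

        public static class UserLookup
        {
            // Vulnerable: user input is pasted into the SQL text, so typing
            //   ' OR '1'='1
            // into the search box changes the meaning of the query.
            public static SqlCommand Vulnerable(SqlConnection conn, string name)
            {
                return new SqlCommand(
                    "SELECT * FROM Users WHERE Name = '" + name + "'", conn);
            }

            // Safe: the value travels as a parameter and is never parsed as SQL.
            public static SqlCommand Parameterized(SqlConnection conn, string name)
            {
                var cmd = new SqlCommand(
                    "SELECT * FROM Users WHERE Name = @name", conn);
                cmd.Parameters.AddWithValue("@name", name);
                return cmd;
            }
        }

    Demonstrating that a line of text typed into their own site can read or delete their data tends to land even without any programming background.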

    Read the article

  • Best way to use GIT to maintain web application template

    - by Darren
    I am a sole developer and I have a web application template that I have created in Visual Studio. I am using Git for source control, but only on my development machine. Presently I have a master branch and I create branches for new features, merging them back into master as I complete each feature. I am at a point now where I am ready to use the template for deployments, and of course I want to continue adding new features via branching and merging. My question is: what would be the typical/recommended way for me to create application deployments based on the master? Should I clone the repository into a new directory that is for a particular web application? Or should I also use branching to do project development based on the main project? The projects would never be merged back into master. However, it would be nice if I could merge future features into master and have the ability to incorporate them into previously completed projects if desired. For more specific details of my environment: I am using TortoiseGit on Windows 7, Visual Studio 2012, and ASP.NET Web Pages. Obviously the main differences between deployments would simply be differing pages, CSS files and jQuery scripts. I found this post as I was writing this one. In order to do this, should I clone the master repository and checkout from it?
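
    For reference, both of the options described above are workable; a sketch of each (repository paths and branch names here are assumed, not prescribed):

        # Option A: one long-lived branch per deployment inside the template repo.
        git checkout master
        git checkout -b client-a        # project branch based on the template
        # ...customize pages, CSS and jQuery scripts; commit as usual...
        git merge master                # later: fold new template features in

        # Option B: a separate clone per deployment that tracks the template.
        git clone /path/to/template-repo client-b
        cd client-b
        git remote rename origin template
        git fetch template
        git merge template/master       # pull future template features in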

    Read the article

  • Access Control and Accessibility in Oracle IRM 11g

    - by martin.abrahams
    A recurring theme you'll find throughout this blog is that IRM needs to balance security with usability and manageability. One of the innovations in Oracle IRM 11g typifies this, as we have introduced a new right that may be included in any role - Accessibility. When creating or modifying a role, you simply select Accessibility along with Open, Print, Edit or whatever rights you want to include in the role. You might, for example, have parallel roles of Reader and Reader with Accessibility and Contributor and Contributor with Accessibility. The effect of the Accessibility right is to relax some of the protection of content in use such that selected users can use accessibility tools. For example, a user with the Accessibility right would be able to use the screen magnification tool, which IRM would ordinarily prevent because it involves screen capture. This new right makes it easy for you to apply security to documents yet, subject to suitable approval processes, cater for the fact that a subset of users might be disproportionately inconvenienced by some of the normal usage constraints. Rather than make those users put up with the restrictions, or perhaps exempt them from using sealed documents altogether, this new right allows you to accommodate them in a controlled manner, and to balance security with corporate accessibility goals.

    Read the article

  • Files backup utility with incremental backups that would keep backup device clean

    - by Wojtek
    I've tested a few backup utilities and still haven't found one that satisfies me. Almost every one of them offers two options:

    - Full backup - not an option to use frequently
    - Incremental backup - seems right, but there's one thing about it: an incremental backup builds on the base of a full backup, backing up only those files that were created or changed. The catch is that after some time you've got a lot of unwanted files from the old backups bloating your backup device. Also, if you accidentally delete your full (first) backup file, the incremental backups become useless (you aren't able to restore from them).

    The thing I'm looking for is a program that backs up files simply by copying them. It would check whether the backup device contains the (unchanged) file:

    - If yes, it proceeds to the next file (we've got the current version backed up)
    - If no, it copies the file to the backup device
    - If the device contains a file that is no longer on our disk, the program deletes it from the backup device

    Is there any such utility that works this way? If not, do you have any hints on how to back up fairly big amounts of data (around 20 GB) quite frequently with incremental backups and not be exposed to those unwanted effects of backup size puffing up?
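
    For what it's worth, the mirror-style behaviour described above is exactly what rsync's --delete mode provides (standard on Linux; the source and destination paths below are assumed):

        # Copy new/changed files, skip unchanged ones, and delete files from the
        # backup that no longer exist in the source.
        rsync -a --delete /home/user/data/ /media/backup/data/

        # Add -n first for a dry run that previews what would be copied or deleted.
        rsync -an --delete /home/user/data/ /media/backup/data/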

    Read the article

  • How to use shared_ptr for COM interface pointers

    - by Seefer
    I've been reading various usage advice relating to the new C++ standard smart pointers unique_ptr, shared_ptr and weak_ptr, and I generally 'grok' what they are about when I'm writing my own code that declares and consumes them. However, all the discussions I've read seem restricted to the simple situation where the programmer is using smart pointers in his/her own code, with no real discussion of techniques for working with libraries that expect raw pointers or other types of 'smart pointers' such as COM interface pointers. Specifically, I'm learning my way through C++ by attempting to get a standard Win32 real-time game loop up and running that uses Direct2D & DirectWrite to render text to the display showing frames per second. My first task with Direct2D is creating a Direct2D factory object with the following code from the Direct2D examples on MSDN:

        ID2D1Factory* pD2DFactory = nullptr;
        HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &pD2DFactory);

    pD2DFactory is obviously an 'out' parameter, and it's here that I become uncertain how to make use of smart pointers in this context, if indeed it's possible. My inexperienced C++ mind tells me I have two problems:

    - With pD2DFactory being a COM interface pointer type, how would a smart pointer work with the AddRef() / Release() member functions of a COM object instance?
    - Are smart pointers able to be passed to functions in situations where the function uses an 'out' pointer parameter?

    I did experiment with the alternative of using _com_ptr_t from the comip.h header to help with pointer lifetime management, and declared the pD2DFactory pointer with the following code:

        _com_ptr_t<_com_IIID<ID2D1Factory, &__uuidof(ID2D1Factory)>> pD2DFactory = nullptr;

    It appears to work so far but, as you can see, the syntax is cumbersome :) So, I was wondering if any C++ gurus here could confirm whether smart pointers are able to help in cases like this and provide examples of usage, or point me to more in-depth discussions of smart pointer usage when needing to work with other code libraries that know nothing of them. Or is it simply a case of my trying to use the wrong tool for the job? :)
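
    To sketch one answer to the out-parameter question (my illustration, not from MSDN): receive the interface through a raw pointer, then hand ownership to a shared_ptr whose custom deleter calls Release() instead of delete. Microsoft's ComPtr (wrl/client.h) or _com_ptr_t are generally better tools for COM, but this shows the shared_ptr mechanics:

        // Sketch: wrapping a COM out-parameter in std::shared_ptr.
        // Build against d2d1.lib.
        #include <d2d1.h>
        #include <memory>

        std::shared_ptr<ID2D1Factory> CreateD2DFactory()
        {
            ID2D1Factory* raw = nullptr;    // the out-param wants a raw pointer
            HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &raw);
            if (FAILED(hr))
                return nullptr;

            // The deleter calls Release() rather than delete, so the COM
            // reference count is decremented when the last shared_ptr goes away.
            return std::shared_ptr<ID2D1Factory>(
                raw, [](ID2D1Factory* p) { if (p) p->Release(); });
        }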

    Read the article

  • Guide.BeginShowMessageBox wrapper

    - by Daniel Moth
    While coding for Windows Phone 7 using Silverlight, I was really disappointed with the built-in MessageBox class, so I found an alternative. My disappointment was the fact that:

    - Display of the messagebox causes the phone to vibrate (!)
    - Display of the messagebox causes the phone to make an annoying sound.
    - You can only have "ok" and "cancel" buttons (no other button captions).

    I was using the messagebox something like this:

        // Produces unwanted sound and vibration,
        // ...plus no customization of button captions.
        if (MessageBox.Show("my message", "my caption", MessageBoxButton.OKCancel) == MessageBoxResult.OK)
        {
            // Do something
            Debug.WriteLine("OK");
        }

    ...and wanted to make minimal changes throughout my code to change it to this:

        // No sound or vibration,
        // ...plus the bonus of customizing button captions.
        if (MyMessageBox.Show("my message", "my caption", "ok, got it", "that sucks") == MyMessageBoxResult.Button1)
        {
            // Do something
            Debug.WriteLine("OK");
        }

    It turns out there is a much more powerful class in the XNA Framework that delivered on my requirements (and offers even more features that I didn't need, like choice of sounds and not blocking the caller): Guide.BeginShowMessageBox. You can use it simply by adding an assembly reference to Microsoft.Xna.Framework.GamerServices. I wrote a little wrapper for my needs and you can find it here (ready to enhance with your needs): MyMessageBox.cs.txt. Comments about this post welcome at the original blog.
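
    For readers who don't want to download the file, a minimal sketch of what such a wrapper can look like (my reconstruction, not the actual MyMessageBox.cs; it blocks the caller to mimic MessageBox.Show, and assumes the GamerServices reference mentioned above):

        using System;
        using Microsoft.Xna.Framework.GamerServices;

        public enum MyMessageBoxResult { Button1, Button2, None }

        public static class MyMessageBox
        {
            public static MyMessageBoxResult Show(
                string message, string caption, string button1, string button2)
            {
                // BeginShowMessageBox is asynchronous; waiting on the
                // IAsyncResult gives the synchronous feel of MessageBox.Show.
                IAsyncResult ar = Guide.BeginShowMessageBox(
                    caption, message, new[] { button1, button2 },
                    0, MessageBoxIcon.None, null, null);
                ar.AsyncWaitHandle.WaitOne();

                int? choice = Guide.EndShowMessageBox(ar);
                if (choice == 0) return MyMessageBoxResult.Button1;
                if (choice == 1) return MyMessageBoxResult.Button2;
                return MyMessageBoxResult.None;   // dismissed with the Back button
            }
        }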

    Read the article

  • Oracle Retail Industry Forum Europe 2014 – Registration Now Open!

    - by Marie-Christin Hansen-Oracle
    We are delighted to announce that registration for the 4th annual Oracle Retail Industry Forum Europe (ORIF Europe) is now open. The event is being held from 10-11 September at the Renaissance St Pancras Hotel in London. ORIF Europe is a must-attend event for Oracle Retail customers, retailers who are about to embark on an Oracle implementation, or those who simply wish to learn more about Oracle Retail solutions and how they support the provision of commerce anywhere. Further details will be announced over the coming weeks, but already confirmed as speakers are: Paul Hornby, Head of eCommerce at Shop Direct, who will discuss the company's ambitions, the challenges faced, and the strategy undertaken by the team in driving the business from a catalogue-based to a web-based commerce business. The session will reveal how Shop Direct and Oracle Retail are working together to achieve the transformation of this business into a world-class digital retailer, by building a foundation for future growth for each of its individual brands and target markets. Kate Ancketill, CEO and Founder of GDR Creative Intelligence, who will illustrate what best-in-market 'Access Anywhere' retail looks like. From individual retail and next-generation personalisation of in-store service, to the land grab for delivery innovation, cutting-edge brands are 'training' consumers to check into stores in exchange for concrete benefits. Kate will explore the opportunity this is opening up across the retail landscape. Register for the Oracle Retail Industry Forum today to secure your place.

    Read the article

  • Inconsistent movement / line-of-sight around obstacles on a hexagonal grid

    - by Darq
    In a roguelike game I've been working on, one of my core design goals has been to allow the player to "Play the game, not the grid." In essence, I want the player's positioning to be tactical because of elements in the game world, not simply because some grid tiles are more advantageous than others in relation to enemies. I am fine with world geometry not being realistic, but it needs to be consistent. In this process I have run into most of the common problems (square tiles? diagonal movement, LOS, corner cases, etc.) and have moved to a hexagonal tile grid. For the most part this has been great, and I've not had too many inconsistencies. Recently, however, I have been stumped by the following: points A and B are both distance 4 from the player (red lines). Line of sight to both is blocked by walls (black tiles). However, due to the hexagonal grid, A can be reached in 4 moves, whereas B requires 5 moves (blue lines). On a hex grid, "shortest path" seems divorced from "direct path": there may be multiple shortest paths to any point, but there is only one direct path (or two in some situations). This is fine; geometry need not be realistic. However, this also seems inconsistent: similar obstacles are more effective in some positions than in others. A player running away from an enemy should be able to run in any direction, increasing the distance between the two actors, yet when placing obstacles or traps between themselves and enemies, the player is best served by running in one of the six directions that don't have multiple shortest paths. Is there a way to rationalise this? Am I missing something that makes this behaviour consistent? Or is there a way to make this behaviour consistent? I am most certainly over-thinking this, but as it is one of my goals, I should give it due diligence.
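
    (For readers unfamiliar with hex metrics, the "distance 4" above is the standard cube-coordinate hex distance; a quick reference implementation, mine rather than Darq's:)

        using System;

        // Hex distance in cube coordinates (where x + y + z == 0): the minimum
        // number of moves between two hexes when nothing is in the way.
        public readonly struct Hex
        {
            public readonly int X, Y, Z;
            public Hex(int x, int y, int z) { X = x; Y = y; Z = z; }

            public static int Distance(Hex a, Hex b) =>
                (Math.Abs(a.X - b.X) + Math.Abs(a.Y - b.Y) + Math.Abs(a.Z - b.Z)) / 2;
        }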

    Read the article

  • Is there a canonical book for learning Java as an experienced developer?

    - by Steven Elliott Jr
    I have been a .NET developer for about the past five or six years, give or take. I have never done any professional Java development, and the last time I really touched it was probably back in college. I have been toying with the Scala language a little bit, but nothing serious. Recently I've been offered an opportunity to do some pretty cool work, but using Java instead of .NET. I think I can get by alright with my current skill set, meaning I already know how to program well and am familiar with languages such as C# and C++, etc. So the syntax and all that language stuff are really not a problem. What I need is a really good reference book and a book about how to think in Java. Each language/framework/stack tries to address things a certain way, and I'm sure Java is no different. What are some great Java books that you simply can't live without? Are there any books that talk about the most important parts of Java that must be understood before all else? As a side note, I will be doing mostly Java web development. I'm not really 100% sure what they are using for persistence, frameworks, servers, etc.

    Read the article

  • Is there a portal dedicated to HTML5 games?

    - by Bane
    Just to get something straight: by "portal", I mean a website that frequently publishes a certain type of games, has a blog, some articles, maybe some tutorials and so on. All of these things are not required (except the game publishing part, of course); for example, I consider Miniclip to be a Flash game portal. The reason for defining this term is that I'm not sure other people use it in this context. I recently (less than a year ago) got into HTML5 game development; nothing serious, just my own small projects that I didn't really show to a lot of people, and that certainly didn't end up somewhere on the web (although I am planning to make a website for my next game). I am interested in the existence of an online portal where indie devs (or non-indie ones, it doesn't really matter that much) can publish their own games, sort of like "by devs for devs", and also a place where you can find some simple tutorials on basic HTML5 game development and so on... I doubt something like this exists, for several reasons:

    - You can't really commercialize an HTML5 game without a strong server-side and microtransactions
    - The code can be easily copied
    - HTML5 is simply new, and things need time to get their own portals somewhere...

    If a thing like this does not exist, I think I might get into making one some day...

    Read the article

  • Looking for some OO design advice

    - by Andrew Stephens
    I'm developing an app that will be used to open and close valves in an industrial environment, and was thinking of something simple like this:

        public static class ValveController
        {
            public static void OpenValve(string valveName)
            {
                // Implementation to open the valve
            }

            public static void CloseValve(string valveName)
            {
                // Implementation to close the valve
            }
        }

    (The implementation would write a few bytes of data to the serial port to control the valve: an "address" derived from the valve name, and either a "1" or a "0" to open or close the valve.) Another dev asked whether we should instead create a separate class for each physical valve, of which there are dozens. I agree it would be nicer to write code like PlasmaValve.Open() rather than ValveController.OpenValve("plasma"), but is this overkill? Also, I was wondering how best to tackle the design with a couple of hypothetical future requirements in mind:

    - We are asked to support a new type of valve requiring different values to open and close it (not 0 and 1).
    - We are asked to support a valve that can be set to any position from 0-100, rather than simply "open" or "closed".

    Normally I would use inheritance for this kind of thing, but I've recently started to get my head around "composition over inheritance" and wonder if there is a slicker solution to be had using composition?
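
    One composition-based sketch (all type and member names here are mine, purely illustrative): each Valve instance is assembled from an address, a protocol strategy and a transport, so both hypothetical future requirements become new protocol implementations rather than new subclasses.

        using System;

        // How a kind of valve encodes a command; a new valve type supplies a
        // new protocol instead of a new subclass.
        public interface IValveProtocol
        {
            byte[] EncodeSetPosition(int address, int percentOpen); // 0=closed, 100=open
        }

        // The original behaviour: one byte for the address, then 1 or 0.
        public sealed class OnOffProtocol : IValveProtocol
        {
            public byte[] EncodeSetPosition(int address, int percentOpen) =>
                new[] { (byte)address, (byte)(percentOpen > 0 ? 1 : 0) };
        }

        public sealed class Valve
        {
            private readonly int _address;
            private readonly IValveProtocol _protocol;
            private readonly Action<byte[]> _send;   // writes the bytes to the serial port

            public Valve(int address, IValveProtocol protocol, Action<byte[]> send)
            {
                _address = address;
                _protocol = protocol;
                _send = send;
            }

            public void Open() => SetPosition(100);
            public void Close() => SetPosition(0);

            // Also covers the future 0-100 positional valve requirement.
            public void SetPosition(int percent) =>
                _send(_protocol.EncodeSetPosition(_address, percent));
        }

    This keeps the nicer call-site syntax (plasmaValve.Open()) without a class per physical valve: the dozens of valves become dozens of Valve instances configured with different addresses and protocols.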

    Read the article

  • Sql Server Prevent Saving Changes That Require Table to be Re-created

    When working with SQL Server Management Studio, if you use the Design view of a table and attempt to make a change that will require the table to be dropped and re-added, you may receive an error message like this one: "Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created." In truth, it's quite likely that you didn't enable such an option, despite the error dialog's accusations, as it is enabled by default when you install SQL Management Studio. You can learn more about the issue in the KB article, "Error message when you try to save a table in SQL Server 2008: Saving changes is not permitted". Warning: as the above article states, it is not recommended that you turn off this option (at least not permanently), as it helps ensure that you do not accidentally change the schema of a table in a way that loses data. Do so at your peril. The simplest way to bypass this error is to go into Tools > Options > Designers and uncheck the option "Prevent saving changes that require table re-creation". The main reason you will see this error is if you attempted to do any of the following to the table whose design you are saving:

    - Change the Allow Nulls setting for a column
    - Reorder columns
    - Change any column's data type
    - Add a new column

    The recommended workaround is to script out the changes to a SQL file and execute them by hand, or simply to write your own T-SQL to make the changes.
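
    For the scripted route, many of the changes listed above can be expressed directly in T-SQL without dropping the table (illustrative table and column names):

        -- Change a column's data type / nullability in place.
        ALTER TABLE dbo.Customers
            ALTER COLUMN Phone varchar(32) NOT NULL;

        -- Add a new column in place.
        ALTER TABLE dbo.Customers
            ADD LastContacted datetime NULL;

        -- Reordering columns genuinely requires a rebuild: create the new
        -- table, copy the data across, drop the old table, then rename.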

    Read the article

  • Execute a SSIS package in Sync or Async mode from SQL Server 2012

    - by Davide Mauri
    Today I had to schedule a package stored in the shiny new SSIS Catalog store that can be enabled with SQL Server 2012 (http://msdn.microsoft.com/en-us/library/hh479588(v=SQL.110).aspx). Once your packages are stored here, they will be executed using the new stored procedures created for this purpose. The script that gets executed if you run your packages right from Management Studio, or through a SQL Server Agent job, will be similar to the following:

        Declare @execution_id bigint
        EXEC [SSISDB].[catalog].[create_execution]
            @package_name='my_package.dtsx',
            @execution_id=@execution_id OUTPUT,
            @folder_name=N'BI',
            @project_name=N'DWH',
            @use32bitruntime=False,
            @reference_id=Null
        Select @execution_id

        DECLARE @var0 smallint = 1
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
            @object_type=50,
            @parameter_name=N'LOGGING_LEVEL',
            @parameter_value=@var0

        DECLARE @var1 bit = 0
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
            @object_type=50,
            @parameter_name=N'DUMP_ON_ERROR',
            @parameter_value=@var1

        EXEC [SSISDB].[catalog].[start_execution] @execution_id
        GO

    The problem here is that the procedure will simply start the execution of the package and return as soon as the package has been started, thus giving you the opportunity to execute packages asynchronously from your T-SQL code. This is just *great*, but what happens if I want to execute a package and WAIT for it to finish (and thus have a synchronous execution of it)? You have to be sure that you add the SYNCHRONIZED parameter to the package execution, before the start_execution procedure:

        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
            @object_type=50,
            @parameter_name=N'SYNCHRONIZED',
            @parameter_value=1

    And that's it. PS: from RC0, the SYNCHRONIZED parameter is automatically added each time you schedule a package execution through the SQL Server Agent. If you're using an external scheduler, just keep this post in mind.

    Read the article

  • What electronic user-story-mapping tools can you recommend?

    - by azheglov
    Agile software development relies heavily on a work item type called user stories. For example, you have a backlog full of user stories and you can select a few of them to work on during the next sprint. But where and how do you find user stories to put into the backlog? There is a popular technique for doing that called story mapping. Jeff Patton invented it and here is the definitive guide on how to do it. The question is, what electronic tools are out there that support Patton's story-mapping technique? I've done a bit of research, found Pivotal and Rally plug-ins (but I'm not a customer of either) and I'm currently experimenting with SilverStories. What other tools are out there? What have you used? What do you (not) recommend? Why? UPDATE: Some people who wrote comments seem to lean towards an answer that applying this technique is simply impossible with an electronic tool and we should just accept that. Can't someone write it up as an answer?

    Read the article

  • WolframAlpha Can Now Do In-depth Analysis of Your Facebook Account

    - by Jason Fitzpatrick
    If you’re a big fan of WolframAlpha’s ability to crunch the numbers on just about anything (and we certainly are), you’ll likely be just as delighted as we were to watch it massage the data from your Facebook account. Find out your most liked, discussed, and shared posts, see your Facebook habits, and other neat trends. I unleashed it on my account this morning, not sure what to expect from the results. Within the results tabulation WolframAlpha provided me with all sorts of neat data breakdowns. I now know exactly how many days it is to my next birthday, the composition of my aggregate posting habits (how many posts are status updates, links, or photos), the time of day when I do the most posting (and what the composition of those posts is), and my average post length. I also know my most liked post and my most commented-on post. It will even crunch the numbers on your network of friends (60.6% of my friends are married, for example). By far one of the more interesting analyses it does on the friendship data, however, is organizing all your friends into relationship clusters, so you can see who in your Facebook network is friends with other people in your Facebook network. The service from WolframAlpha is free: simply visit the WolframAlpha search portal and type in “Facebook report” to start the process. You’ll be prompted to create a WolframAlpha account if you don’t have one and to authorize the WolframAlpha Facebook app to access your data. Your Facebook data is cached to your WolframAlpha account for one hour in order to crunch the numbers and display the results.

    Read the article

  • Ubuntu not shutting down ( going to black screen ) 12.04

    - by Orrin Fox
    I am currently using a USB persistent install of Ubuntu. It's a simple 4GB drive with a 2.8GB casper-rw storage partition. I set up an administrator account and set it to log in automatically. I also removed ubiquity, to simply use this as a go-anywhere install. Here's my issue: I'm logged in to my account, and I click the top-right gear and select "Shut down". Text pops up showing it's quitting processes, etc., and then it goes to the Plymouth animation. But... the screen goes black, and then it goes to the login screen. Now, when I'm at the login screen, I go into a terminal (Alt+F2) and, don't you know, I'm logged in as Ubuntu. So then I try the following:

        ubuntu@ubuntu:~$ sudo shutdown now

    It goes to the Plymouth screen again as if it's shutting down, AND the screen goes black once again, but the computer has not turned off: the USB light is still flashing, the fans are still on; the only thing off is the screen. Is this a bug? If not, maybe I did something wrong? Perhaps it's because I made an account, but... if there is a workaround for this, please let me know. Thanks again, Fox

    Read the article

  • What is an elegant way to install non-repository software in 12.04?

    - by Tomas
    Perhaps I missed something when Canonical removed the "Create launcher" option from the right-click menu, because I've really been missing that little guy. For me, it was the preferred way to install software that comes not in a .deb, but in a tar.gz, for example. (Note: in that tar.gz I have a folder with the compiled files; I'm NOT compiling from source.) I just downloaded the new Eclipse IDE and extracted the tar.gz to my /usr folder. Now I'd like to add it to my desktop and dash so it can be started easily. Intuitively I would right-click the desktop and create a launcher, and after this I'd copy the .desktop file to /usr/share/applications. However, creating a launcher is not possible. My question: how would you install an already compiled tar.gz that you have downloaded from the internet? Below are a few things I've seen, but these are all more time-consuming than the right-click option. If you have any better ideas, please let me know. Thanks!

    Manual copy & create a .desktop file, manually: simply extract the archive to /usr. Create a new text file, adding something along the lines of the code block below:

        [Desktop Entry]
        Version=1.0
        Type=Application
        Terminal=false
        Exec="/usr/local/eclipse42/eclipse"
        Name="Eclipse 4.2"
        Icon=/home/tomas/icons/eclipse.svg

    Rename this file to eclipse42.desktop and make it executable, then copy it to /usr/share/applications.

    Manual copy & create a .desktop file, via GUI: fossfreedom has elaborated on this in "How can I create launchers on my desktop?" Basically it involves the command:

        gnome-desktop-item-edit --create-new ~/Desktop

    After creating the launcher, copy it to /usr/share/applications.

    Read the article

  • LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology

    - by Eric Z Goodnight
    With image technology progressing faster than ever, High-Def has become the standard, giving TV buyers more options at cheaper prices. But what’s different in all these confusing TVs, and what should you know before buying one? If you’re considering buying a television this Holiday season for a loved one (or simply for yourself), it can be a big help to know what to look for. Take a look to find out what sets HD televisions apart, learn some of the confusing jargon associated with them, and see a comparison of four of the types of HDTVs commonly sold today.

    Read the article

  • Sublime Text 2 review!

    - by Anirudha
    A few months ago I was looking for an editor for simple edits to HTML, CSS, and JavaScript. Before that I had tried Notepad++, which is quite awesome and could do all the work I wanted to get done with it. I put two editors on my list: first Sublime Text, and second PhpStorm. Both are cross-platform. I tried both and both work fine. I finally went with Sublime Text 2.

    Here is why Sublime Text 2 is awesome:

    - PhpStorm and Sublime Text 2 are both licensed software, but Sublime Text 2's evaluation can be used for unlimited time, whereas PhpStorm's is available for 30 days only.
    - Sublime Text 2 is very memory-efficient and lightweight; this is the first thing people find best about it. The problem for me with PhpStorm is that it sometimes goes unresponsive, as it did when I tried HTML5 Boilerplate. Sublime Text 2 never hangs, whatever the memory size of the project, compared to PhpStorm.
    - In Sublime Text 2 you can code faster after learning some shortcuts and the basics that apply specifically to it.
    - Sublime Text 2 comes with a distraction-free mode, while PhpStorm has nothing beyond full-screen.
    - Sublime Text 2 supports almost every language. I have seen many people in the community who have moved from their PHP IDE to Sublime Text 2. You can use LESS and CoffeeScript in it.
    - There are many customizations for Sublime Text 2 out on GitHub.

    In the past I also tried WebMatrix; the latest version of WebMatrix has nothing as good as Sublime Text 2. Sublime Text 2 is the best fit for my requirements. So cheers: people should try Sublime Text 2 if they are looking for a solid tool for learning new things. Sublime Text 2 can be downloaded from http://www.sublimetext.com/. Thanks for reading my post.

    Read the article

  • Is it just me or is this a baffling tech interview question

    - by Matthew Patrick Cashatt
    Background

    I was just asked in a tech interview to write an algorithm to traverse an "object" (notice the quotes) where A is equal to B and B is equal to C and A is equal to C. That's it. That is all the information I was given. I asked the interviewer what the goal was, but apparently there wasn't one, just "traverse" the "object". I don't know about anyone else, but this seems like a silly question to me. I asked again, "Am I searching for a value?". Nope. Just "traverse" it. Why would I ever want to endlessly loop through this "object"?? To melt my processor maybe?? The answer according to the interviewer was that I should have written a recursive function. OK, so why not simply ask me to write a recursive function? And who would write a recursive function that never ends?

    My question: Is this a valid question to the rest of you and, if so, can you provide a hint as to what I might be missing? Perhaps I am thinking too hard about solving real-world problems. I have been successfully coding for a long time, but this tech interview process makes me feel like I don't know anything.

    Final Answer: CLOWN TRAVERSAL!!! (See @Matt's answer below.)

    Thanks! Matt

    Read the article

  • array and array_view from amp.h

    - by Daniel Moth
    This is a very long post, but it also covers what are probably the classes (well, array_view at least) that you will use the most with C++ AMP, so I hope you enjoy it!

    Overview

    The concurrency::array and concurrency::array_view template classes represent multi-dimensional data of type T, of N dimensions, specified at compile time (and you can later access the number of dimensions via the rank property). If N is not specified, it is assumed to be 1 (i.e. the single-dimensional case). They are rectangular (not jagged). The difference between them is that array is a container of data, whereas array_view is a wrapper over a container of data. In that respect, array behaves like an STL container, whereas the closest thing an array_view behaves like is an STL iterator (albeit with random access and allowing you to view more than one element at a time!).

    The data in the array (whether provided at creation time or added later) resides on an accelerator (which is specified at creation time, either explicitly by the developer or set to the default accelerator by the runtime) and is laid out contiguously in memory. The data provided to the array_view is not stored by/in the array_view, because the array_view is simply a view over the real source (which can reside on the CPU or on another accelerator). The underlying data is copied on demand to wherever the array_view is accessed. Elements which differ by one in the least significant dimension of the array_view are adjacent in memory.

    array objects must be captured by reference into the lambda you pass to the parallel_for_each call, whereas array_view objects must be captured by value into that same lambda.

    Creating array and array_view objects and relevant properties

    You can create array_view objects from other array_view objects of the same rank and element type (a shallow copy, also possible via the assignment operator) so that they point to the same underlying data, and you can also create array_view objects over array objects of the same rank and element type, e.g.

        array_view<int,3> a(b); // b can be another array or array_view of ints with rank=3

    Note: unlike the constructors above, which can be called anywhere, the ones in the rest of this section can only be called from CPU code.

    You can create array objects from other array objects of the same rank and element type (copy and move constructors) and from other array_view objects, e.g.

        array<float,2> a(b); // b can be another array or array_view of floats with rank=2

    To create an array from scratch, you need to at least specify an extent object, e.g. array<int,3> a(myExtent);. Instead of an explicit extent object, there are convenience overloads when N<=3, so you can pass 1, 2 or 3 integers (depending on the array's rank) and have the extent created for you under the covers. At any point you can access the array's extent through the extent property. The exact same thing applies to array_view (extent as constructor parameters, including the convenience overloads, and the property).

    While passing only an extent object is enough to create an array (it means that the array will be written to later), it is not enough for the array_view case, which must always wrap some other container (on which it relies for storage space and actual content). So in addition to the extent object (which describes the shape you'd like to view/access the data through), to create an array_view from another container (e.g. std::vector) you must pass in the container itself (which must expose .data() and .size() methods, as std::array does, for example), e.g.

        array_view<int,2> aaa(myExtent, myContainerOfInts);

    Similarly, you can create an array_view from a raw pointer to data plus an extent object.

    Back to the array case: to optionally initialize the array with data, you can pass an iterator pointing to the start of the source container (and optionally one pointing to its end), e.g.

        array<double,1> a(5, myVector.begin(), myVector.end());

    We saw that arrays are bound to an accelerator at creation time, so in case you don't want the C++ AMP runtime to assign the array to the default accelerator, all array constructors have overloads that let you pass an accelerator_view object, which you can later access via the accelerator_view property.

    Note that at the point of initializing an array with data, a synchronous copy of the data to the accelerator takes place, and to copy any data back an explicit copy call is required, as we'll see. This does not happen with the array_view, where copying is on demand...

    refresh and synchronize on array_view

    Note that in the previous section on constructors, unlike the array case, there was no overload that accepts an accelerator_view for array_view. That is because the array_view is simply a wrapper, so the allocation of the data has already taken place before you created the array_view. When you capture an array_view variable in your call to parallel_for_each, the copy of data between the non-CPU accelerator and the CPU takes place on demand (i.e. it is implicit, versus the explicit copy that has to happen with the array). There are some subtleties to the on-demand copying, which we cover next.

    The assumption when using an array_view is that you will continue to access the data through the array_view, and not through the original underlying source, e.g. the pointer to the data that you passed to the array_view's constructor. So if you modify the data through the array_view on the GPU, the original pointer on the CPU will not "know" that, unless one of two things happens:

    - you access the data through the array_view on the CPU side, i.e. using the indexing we cover below
    - you explicitly call the array_view's synchronize method on the CPU (this also gets called in the array_view's destructor for you)

    Conversely, if you make a change to the underlying data through the original source (e.g. the pointer), the array_view will not "know" about those changes unless you call its refresh method.

    Finally, note that if you create an array_view of const T, then the data is copied to the accelerator on demand, but it does not get copied back, e.g.

        array_view<const double, 5> myArrView(…); // myArrView will not get copied back from GPU

    There is also a similar mechanism to achieve the reverse, i.e. not to copy the data of an array_view to the GPU.

    copy_to, data, and global copy/copy_async functions

    Both array and array_view expose two copy_to overloads that allow copying them to another array or to another array_view, and these operations can also be achieved with assignment (via the = operator overloads). Both array and array_view also expose a data method to get a raw pointer to the underlying data, e.g. float* f = myArr.data();. Note that for array_view this only works when the rank is equal to 1, because the data is only guaranteed contiguous in one dimension, as covered in the overview section.

    Finally, there are a bunch of global concurrency::copy functions returning void (and corresponding concurrency::copy_async functions returning a future) that allow copying between arrays, array_views, iterators, etc. Just browse intellisense or amp.h directly for the full set. Note that for array, all copying described throughout this post is deep copying, as per other STL container expectations. You can never have two arrays point to the same data.

    indexing into array and array_view plus projection

    Reading or writing data elements of an array is only legal when the code executes on the same accelerator the array was bound to. In the array_view case, you can read/write on any accelerator, not just the one where the original data resides, and the data gets copied for you on demand. In both cases, the way you read and write individual elements is via indexing, as described next.

    To access (or set the value of) an element, you can index into it by passing an index object to the subscript operator. Furthermore, if the rank is 3 or less, you can use the function () operator to pass integer values instead of having to use an index object, e.g.

        array<float,2> arr(someExtent, someIterator); // or array_view<float,2> arr(someExtent, someContainer);
        index<2> idx(5,4);
        float f1 = arr[idx];
        float f2 = arr(5,4); // f2 == f1
        // and the reverse for assigning, e.g.
        arr(idx[0], 7) = 6.9;

    Note that for both array and array_view, regardless of rank, you can also pass a single integer to the subscript operator, which results in a projection of the data: you get back an array_view of rank N-1 (or, if the rank was 1, just the element at that location).

    Not Covered

    In this already very long post, I am not going to cover three very cool methods (and related overloads) that both array and array_view expose: view_as, section, reinterpret_as. We'll revisit those at some point in the future, probably on the team blog. Comments about this post by Daniel Moth welcome at the original blog.
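
    To tie the pieces together, a minimal end-to-end sketch (my example, not from the post): wrap a std::vector in an array_view, process it in a parallel_for_each, and synchronize the results back.

        // Square every element of a vector on the default accelerator.
        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        void square_all(std::vector<int>& v)
        {
            array_view<int, 1> av(static_cast<int>(v.size()), v); // wraps the CPU data
            parallel_for_each(av.extent, [=](index<1> idx) restrict(amp)
            {
                av[idx] *= av[idx];    // array_view captured by value, as required
            });
            av.synchronize();          // explicit copy-back instead of waiting for on-demand access
        }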

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask; there are a lot of cases where the wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm:

    - Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based, so I create one point for every tile on the grid). The size of the square's side is close to the radius where I make objects transparent; I found 6x6 to be a good value, so that's 36 visibility points in total.
    - For every visibility point on the grid, check if that point is in the player's line of sight.
    - For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent.

    This algorithm works - not perfectly, but it only requires some tuning - however it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D, but I'm not looking for an engine-specific solution.
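
    (For comparison, one commonly suggested cheaper approach - a sketch in Unity-flavoured C#, where WallTransparency is a hypothetical fade helper on each wall - is a single RaycastAll from the camera to the player, fading everything the ray passes through. It trades the 36-point grid for one ray, at the cost of the finer per-surrounding-area behaviour described above:)

        using UnityEngine;

        // Hypothetical fade helper; a real one would swap the wall's material
        // to a transparent shader and tween its alpha.
        public class WallTransparency : MonoBehaviour
        {
            public void FadeOut() { /* fade this wall's material */ }
        }

        public class WallFader : MonoBehaviour
        {
            public Transform player;      // assign in the inspector
            public LayerMask wallMask;    // layer containing the wall pieces

            void Update()
            {
                Vector3 origin = Camera.main.transform.position;
                Vector3 toPlayer = player.position - origin;

                // One ray instead of ~60: every wall between camera and player fades.
                foreach (RaycastHit hit in Physics.RaycastAll(
                         origin, toPlayer.normalized, toPlayer.magnitude, wallMask))
                {
                    var wall = hit.collider.GetComponent<WallTransparency>();
                    if (wall != null) wall.FadeOut();
                }
            }
        }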

    Read the article

  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project, then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it, because the paths stored in the suppression file are no longer valid, but you probably won't be aware of it until it happens to you. If you don't know what code analysis is, or you don't know what the suppression file is, then you can probably stop reading now; otherwise read on for a simple short demo. Let's create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with an sp_ prefix is generally frowned upon, and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties. A subsequent build causes a code analysis warning, as we might expect. Let's suppose we actually don't mind stored procedures with sp_ prefixes: we can just right-click on the message to suppress it and get rid of it. That causes a suppression file to get created in our project. Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now, if we simply move the file within our project to a new folder, notice that the suppression we just created gets removed from the suppression file. As I alluded to above, this behaviour is intuitive, because the path originally stored in the suppression file is no longer relevant, but you're probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet
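
    (For concreteness, the kind of procedure used in the demo; the sp_ prefix is what the static code analysis rule flags - SR0016, if memory serves:)

        -- A procedure whose name triggers the "avoid sp_ prefix" warning.
        CREATE PROCEDURE dbo.sp_dummy
        AS
            SELECT 1 AS dummy;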

    Read the article

  • "Failed to create swap space" error during installation

    - by Welsh Heron
    I've been trying to install Ubuntu for the past two days or so, but I've been running into a problem: every time I run the installation program on the LiveCD, I always get the same (or a very similar) error:

        Failed to create swap space
        The creation of swap space in partition #3 of SCSI5 (0,0,0) (sda) failed.

    So far, I've run DBAN (Darik's Boot and Nuke) on my HDD once, to make absolutely sure that everything on it had been erased. Then I simply put in the LiveCD and let it run the automated install. I get the above error directly after I tell it to automatically partition the HDD (it will work for a second or so, then this will pop up), forcing me back to the screen that lets me choose whether I want to automatically or manually partition the HDD. Well, after failing to install the software automatically, I did a little research, learned enough about partitioning Linux to use the 'Manual partitioning' option, and partitioned the HDD as follows (it's a 1TB drive):

    - /home - (the rest) - ext2
    - / - 20GB - ext2
    - /boot - 100MB - ext2
    - /swap - 8GB
    - /EFIboot - 40MB

    The only difference when I tried this method was that I got THIS message:

        Failed to create swap space
        The creation of swap space in partition #2 of SCSI5 (0,0,0) (sda) failed.

    Basically, the only difference was that there was now a '2' instead of a '3'. If I may ask, what exactly am I doing wrong? I've tried looking around the internet (that's basically all I've done for the last two days), but no one seems to have the same problem that I have, and I've tried most of the solutions for similar problems (DBAN, formatting partitions in ext2 format, etc.). The only thing I haven't tried is using the terminal to manually partition the HDD... and I actually DID try to do this, but I wasn't able to get past 'su''s password demand, so I wasn't able to use the terminal. Thank you for your help in advance. ~Welsh
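
    (As an aside for readers hitting the same wall: in the live session the terminal route doesn't need su - sudo works without a password there - and manually preparing the swap partition looks roughly like this, with the device and partition numbers assumed:)

        # From the live CD/USB session:
        sudo parted /dev/sda print     # inspect the partition table
        sudo mkswap /dev/sda3          # write a swap signature on the swap partition
        sudo swapon /dev/sda3          # activate it so the installer can pick it up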

    Read the article

  • Hybrid Graphics on Windows 7/Ubuntu 12.04 Dual Boot

    - by Noob.
    Alright, so here's the situation: I am using an ASUS UL80VT with two graphics adapters, integrated Intel graphics and an NVIDIA G210M. I was running an Ubuntu 12.04 / Windows 7 dual boot (on separate partitions). The machine worked perfectly (including the display drivers) without me needing to install anything special or change any settings. However, my hard drive was corrupted and I lost all my data yesterday, so after it was replaced, I installed Ubuntu 12.04 x64 again after installing Windows 7. I booted up Ubuntu after installation and noticed it was using Unity 2D by default. Gnome 3.4 wasn't working properly either, so I guessed that the NVIDIA G210M driver wasn't installed/working and the OS was instead using the integrated graphics. I checked "Additional Drivers", but there were no proprietary drivers listed there, so I went to the NVIDIA website, downloaded the driver directly and installed it. I restarted, but there was no change. After this, I read somewhere that I should change my SATA mode in the BIOS to "Compatible" rather than "Enhanced". This worked fine and fixed the problem (both Unity and Gnome were working perfectly), but then when I tried booting up Windows 7, I received a BSOD. So I changed it back to Enhanced, and once again the NVIDIA G210M graphics isn't working on Ubuntu, but on Windows 7 it is. I do not want to keep changing from Enhanced to Compatible every time I reboot into Ubuntu, and neither do I want to simply use only one OS. Note that both the NVIDIA G210M and the integrated graphics work perfectly on Windows 7. Also, I don't care about switching between them; I just want to be able to use the NVIDIA one. What can I do so that both Windows 7 and Ubuntu work and the NVIDIA G210M works on Ubuntu?

    Read the article
