Search Results

Search found 27050 results on 1082 pages for 'project'.

Page 686/1082

  • Understanding Visitor Pattern

    - by Nezreli
    I have a hierarchy of classes that represents GUI controls. Something like this: Control - ContainerControl - Form. I have to implement a series of algorithms that work with these objects doing various stuff, and I'm thinking that the Visitor pattern would be the cleanest solution. Let's take for example an algorithm which creates an XML representation of a hierarchy of objects. Using the 'classic' approach I would do this:

      public abstract class Control
      {
          public virtual XmlElement ToXML(XmlDocument document)
          {
              XmlElement xml = document.CreateElement(this.GetType().Name);
              // Create element, fill it with attributes declared with control
              return xml;
          }
      }

      public abstract class ContainerControl : Control
      {
          public override XmlElement ToXML(XmlDocument document)
          {
              XmlElement xml = base.ToXML(document);
              // Use foreach to fill XmlElement with child XmlElements
              return xml;
          }
      }

      public class Form : ContainerControl
      {
          public override XmlElement ToXML(XmlDocument document)
          {
              XmlElement xml = base.ToXML(document);
              // Fill remaining elements declared in Form class
              return xml;
          }
      }

    But I'm not sure how to do this with the Visitor pattern. This is the basic implementation:

      public class ToXmlVisitor : IVisitor
      {
          public void Visit(Form form)
          {
          }
      }

    Since even the abstract classes help with the implementation, I'm not sure how to do that properly in ToXmlVisitor. Perhaps there is a better solution to this problem. The reason I'm considering the Visitor pattern is that some algorithms will need references not available in the project where the classes are implemented, and there are a number of different algorithms, so I'm avoiding large classes. Any thoughts are welcome.
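
    One way to keep that base-class reuse inside a visitor is to give the visitor one helper per level of the hierarchy and let the more specific methods call the more general ones. A minimal sketch, assuming the Control/ContainerControl/Form classes above; the IVisitor shape, helper names and constructor are illustrative, not from the post:

      using System.Xml;

      public interface IVisitor
      {
          void Visit(Form form);
          // ...one overload per concrete control type that can be visited
      }

      public class ToXmlVisitor : IVisitor
      {
          private readonly XmlDocument document;

          public ToXmlVisitor(XmlDocument document)
          {
              this.document = document;
          }

          public void Visit(Form form)
          {
              XmlElement xml = VisitContainerControl(form);
              // fill remaining attributes declared in Form, then attach xml to the document
          }

          // Mirrors ContainerControl.ToXML: reuse the Control-level logic,
          // then append the children (each child would Accept this visitor).
          private XmlElement VisitContainerControl(ContainerControl container)
          {
              XmlElement xml = VisitControl(container);
              return xml;
          }

          // Mirrors Control.ToXML: create the element and fill the common attributes.
          private XmlElement VisitControl(Control control)
          {
              XmlElement xml = document.CreateElement(control.GetType().Name);
              return xml;
          }
      }

    Each control class then only needs an Accept(IVisitor visitor) method that calls the matching Visit overload.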

    Read the article

  • Transaction classification. Artificial intelligence

    - by Alex
    For a project, I have to classify a list of banking transactions based on their description. Suppose I have 2 categories: health and entertainment. Initially, the transactions will have basic information: date and time, amount and a description given by the user. For example:

      Transaction 1: 09/17/2012 12:23:02 pm - 45.32$ - "medicine payments"
      Transaction 2: 09/18/2012 1:56:54 pm - 8.99$ - "movie ticket"
      Transaction 3: 09/18/2012 7:46:37 pm - 299.45$ - "dentist appointment"
      Transaction 4: 09/19/2012 6:50:17 am - 45.32$ - "videogame shopping"

    The idea is to use that description to classify the transaction: 1 and 3 would go to the "health" category, while 2 and 4 would go to "entertainment". I want to use the Google Prediction API to do this. In reality, I have 7 different categories and, for each one, a lot of keywords related to that category. I would use some for training and some for testing. Is this even possible? I mean, to determine the category given a few words? Plus, the number of words is not necessarily the same in every transaction. Thanks for any help or guidance! Very appreciated. Possible solution: https://developers.google.com/prediction/docs/hello_world?hl=es#theproblem
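
    To make the idea concrete, here is a deliberately tiny illustration (plain keyword matching in C#, not the Google Prediction API) of how a handful of description words can separate categories. The category names and sample descriptions come from the question above; the keyword lists and everything else are invented for the sketch:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class KeywordClassifier
      {
          // Invented keyword lists; in practice these would come from training data.
          static readonly Dictionary<string, string[]> Keywords = new Dictionary<string, string[]>
          {
              { "health",        new[] { "medicine", "dentist", "doctor", "pharmacy" } },
              { "entertainment", new[] { "movie", "videogame", "concert", "ticket" } }
          };

          static string Classify(string description)
          {
              var words = description.ToLowerInvariant().Split(' ');
              // Pick the category whose keyword list overlaps the description the most.
              return Keywords
                  .OrderByDescending(c => words.Count(w => c.Value.Contains(w)))
                  .First().Key;
          }

          static void Main()
          {
              Console.WriteLine(Classify("medicine payments"));   // health
              Console.WriteLine(Classify("movie ticket"));        // entertainment
          }
      }

    A trained classifier such as the Prediction API does essentially this with weights learned from the labelled examples, so descriptions of different lengths are not a problem.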

    Read the article

  • User Acceptance Testing Defect Classification when developing for an outside client

    - by DannyC
    I am involved in a large development project in which we (a very small start-up) are developing for an outside client (a very large company). We recently received their first output from UAT testing of a fairly small iteration, which listed 12 'defects', triaged into three categories: Low, Medium and High. The issue we have is whether everything in this list should be recorded as a 'defect' - some of the issues they found would be better described as refinements, or even 'nice-to-haves', and some we think are not defects at all. The client's QA lead says that it is standard for them to label every issue they identify as a defect; however, we are a bit uncomfortable about this. While the relationship is good, we don't see a huge problem with this, but we are concerned that, if the relationship suffers in the future, these lists of 'defects' could prove costly for us. We don't want to come across as being difficult, or take things too personally here, and we are happy to make all of the changes identified; however, we are a bit concerned, especially as there is an uneven power balance at play in our relationship. Are we being paranoid here? Or could we be setting ourselves up for problems down the line by agreeing to this classification?

    Read the article

  • Automate #include refactoring in C++ [on hold]

    - by Mikhail
    I have a big project with hundreds of files. And as often happens in C++ projects, the #include directives are a mess. I want to refactor them to increase clarity, decrease compilation time and simplify analysis. For each .h file I want to make sure that it has #include directives only for the types it uses, and only forward declarations for types that are used as T* or T&. For each .cpp file I want to make sure that it has #include directives only for the types it uses that are not already included by other headers (no reliance on indirect includes when possible). I'm looking for a tool which will help me to automate this refactoring. For now I only know of tools that help to remove redundant includes, and there are many: PC-lint, include-what-you-use, cppclean, ProFactor IncludeManager. But I know of no tools to help me move the necessary includes into .h files or replace includes with forward declarations. Any ideas? Tools for Windows and Visual Studio are preferred. Update: considered to be off-topic. Please follow the link to Software Recommendations: http://softwarerecs.stackexchange.com/q/4461/3331

    Read the article

  • Exporting PowerPoint Slides with Specific Heights and Widths

    - by Damon Armstrong
    I found myself in need of exporting PowerPoint slides from a presentation and was fairly excited when I found that you could save them off in standard image formats. The problem is that Microsoft conveniently exports all images with a resolution of 960 x 720 pixels, which is not the resolution I wanted. You can, however, specify the resolution if you are willing to put a macro into your project:

      Sub ExportSlides()
        For i = 1 To ActiveWindow.Selection.SlideRange.Count
          Dim fileName As String
          ' Zero-pad single-digit slide numbers so the exported files sort correctly
          If (i < 10) Then
            fileName = "C:\PowerPoint Export\Slide0" & i & ".png"
          Else
            fileName = "C:\PowerPoint Export\Slide" & i & ".png"
          End If
          ActiveWindow.Selection.SlideRange(i).Export fileName, "PNG", 1280, 720
        Next
      End Sub

    When you call the Export method you can specify the file type as well as the dimensions to use when creating the image. If the macro approach is not your thing, then you can also modify the default settings through the registry: http://support.microsoft.com/kb/827745

    Read the article

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project I did involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader, and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine, and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!

    Read the article

  • Finishing an iteration early

    - by f1dave
    I'd like some input on this from those working with agile methodologies... A current project is finding that development on our planned user stories is finishing some time before the end of the iteration, and that the testing effort and business acceptance is what's actually dragging us out longer towards the end. This means that the devs in question have spare time, and they're essentially going out to the iteration+1 backlog and starting work on cards there before our current iteration cards are 'done'. As iteration manager, I want to put a stop to this - I want a more team-oriented approach where the group takes ownership of getting all the cards done, as opposed to "Well, dev's done, so what do I dev next?" The problem I face is convincing the team of this. On one hand, I understand why the devs don't want to test the code they've written (there are unit tests they write of course, but the manual testing to be done could be influenced by their bias). The team sees working ahead as making our next iterations easier, because a lot of the work is done before we start. I see this as screwing with the whole system of planning/actuals - but it's difficult to convince the team as to why this matters. What advice can you guys and girls give? How do we stop devs reaching ahead? What should they be doing instead? How much of a problem is this in the scheme of things, if things are still getting done?

    Read the article

  • How do I learn IPSec VPN implementation on FreeBSD from pfSense

    - by Lang Hai
    I've been trying to figure out a complete working solution for an IPSec VPN implementation on FreeBSD, but with no luck so far. pfSense seems to have done a fantastic job of supporting IPSec, even for mobile clients, so I downloaded and installed pfSense hoping to figure out how it works, or at least see some configuration examples, but I couldn't find anything interesting, maybe because I'm not familiar with pfSense, so I'd like to ask for help. How does pfSense implement IPSec, and what tools are used? Where does pfSense store all its configuration files? And since pfSense has its own kernel mods and acts as a different OS, there's no way for us to install it on top of an existing FreeBSD box; plus, it is such a great project combining those fantastic features, so my question can be extended to: how do we learn from pfSense, and implement its features on top of a regular FreeBSD server?

    Read the article

  • BI Applications Test Drive: Joint Partner+Oracle Go To Market Initiatives

    - by Mike.Hallett(at)Oracle-BI&EPM
    A challenge you may be facing is how to easily show the business value of BI to a set of customers. The key we find to achieve this is to show best-in-class business analytic examples specific to a business person's role and needs - e.g. "HR analytics" for HR professionals, "Spend Analytics" for procurement professionals, and so on. We have created for you, our specialised partners, the ability to run Oracle BI Applications Test Drive Workshops for your customers. These are carefully scripted to allow a customer business person (usually not IT) to navigate for themselves around a series of dashboards and analyses targeted to show how BI can help their business and drive ROI. These Oracle BI Applications Test Drive kits (in English) are now downloadable from our OMS4P/OPN portal. See it by clicking on this link: http://www.oracle.com/partners/secure/marketing/bi-apps-test-drive-519829.html Translations of this kit into Italian, French, Spanish and German will be added to this portal soon. NOTE: These are not designed for "training" customers: they really address the need for an effective call to action for any customer you talk to who is in the early stages of exploring their options and the business benefits of a BI project, especially if they are already an Oracle applications customer (eBusiness Suite, PeopleSoft, Siebel, JDE). For more demand generation kits see another blog article "Joint Partner+Oracle Go To Market Initiatives: BI Customer Event Kits"

    Read the article

  • Redhat cluster Vs Pacemaker Vs Gluster Vs Sheepdog

    - by chandank
    Changing the entire question as the earlier one was very confusing. I have been exploring different clustering systems to run virtual machines on two different machines on a LAN with high availability. Currently I am already using a DRBD resource on two different machines in Primary/Secondary mode. In case the primary fails, I manually promote the secondary to primary and start the VM. I also explored Gluster and it looks good; however, I would rather prefer clustering over Gluster (a user-space FS). So if anyone has an idea which one would be better from an ease-of-use perspective, I would be interested. Moreover, the Sheepdog project appears good; however, I could not find much documentation/HOWTOs. I am using CentOS 6.

    Read the article

  • What is a generic term for name/identifier? (as opposed to label)

    - by d3vid
    I need to refer to a number of things that have both an identifier value (used in code and configuration), and a human-readable label. These things include: database columns dropdown items subapplications objects stored in a dictionary I want two unambiguous terms. One to refer to the identifier/value/key. One to refer to the label. As you can see, I'm pretty settled on the latter :) For the former, identifier seems best (not everything is strictly a key, and value and name could refer to the label; although, identifier usually refers only to a variable name), but I would prefer to follow an established practice if there is one. Is there an established term for this? (Please provide a source.) If not, are there any examples of a choice from a significant source (Java APIs, MSDN, a big FLOSS project)? (I wasn't sure if this should be posted here or to English Language & Usage. I thought this was the more appropriate expert audience. Happy to migrate if not.)

    Read the article

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that those libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on. Recently a client made us question one of our fundamental assumptions about the .NET Framework and Web Services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right? Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test my client's hypothesis. Read More >
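
    For reference, a minimal sketch of the client's proposed approach: calling the service directly with HttpWebRequest and loading the response into an XmlDocument. The URL and the handling of the returned XML are placeholders, not from the article:

      using System;
      using System.Net;
      using System.Xml;

      class DirectServiceCall
      {
          static void Main()
          {
              // Placeholder endpoint; a real test would call the actual service URL.
              var request = (HttpWebRequest)WebRequest.Create(
                  "http://example.com/service.asmx/GetData?category=books");

              using (var response = request.GetResponse())
              {
                  var doc = new XmlDocument();
                  doc.Load(response.GetResponseStream());

                  // Sort/format the raw XML here (e.g. via XSLT) before rendering.
                  Console.WriteLine(doc.DocumentElement.ChildNodes.Count);
              }
          }
      }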

    Read the article

  • Slow to sync music files?

    - by pst007x
    I have created a folder in the Ubuntu One sync folder and called it music. I have added various albums and the folder has started to sync. However, the files were added back in October and still only the folders have synced, no music files. I tested this service before and added a single music file directly into the Ubuntu One folder (no sub-folders) and within a few days it synced; however, it seems anything in sub-folders stalls or takes a very long time. My Ubuntu One client always says syncing and the progress bar creeps along, but still no files have synced. I know there are issues with speed, but a month and still going? I recently tested the same files with Dropbox and it took 9 hours. I have forwarded the HTTPS port (443) both in the software firewall and in my router, and I tried disabling both firewalls too; either way it makes no difference. I have also tried both from home and the office on different Ubuntu systems. Is there anything anyone has done to improve this service? I am trying to integrate the Ubuntu One service into the office to share project files but the syncing is taking too long. I am using the latest Ubuntu 10.10 (fully updated, fresh install). I love Ubuntu and wish to continue to support it any way I can, so a solution would be good :-) Any help would be appreciated. Thanks Paul

    Read the article

  • How broad should a computer science/engineering student go?

    - by AskQuestions
    I have less than 2 years of college left and I still don't know what to focus on. But this is not about me, this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language / technology and then doing that for the rest of your life". Well, sure, but it's impossible to really learn everything thoroughly. Does that mean that one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... But projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial intelligence and machine learning related stuff and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of those things. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: How broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

    Read the article

  • StarterSTS 1.5

    - by Your DisplayName here!
    I have had the 1.5 version of StarterSTS sitting here for quite some time now, but I was always reluctant to release it. Some of the reasons are: too many new features for a single (small) version change; too many features that are optional, like bridged authentication, which make the code very complex; the way I implemented Azure integration adds a dependency on the Azure SDK, even for “on-premise” installations, and I don’t like that; and because I am using some WebForms bits and some WCF bits, the URL structure got messy. WebForms also doesn’t help a lot with testability. All of the above reasons together, plus the fact that I am the only architect, developer and tester on this project, made me come to the conclusion that I will cancel this release. But wait… StarterSTS 1.5 is fully functional. We use both the on-premise and Azure versions internally “in production”. Cancelling means I will release the latest source code on Codeplex – but will not mark it as a “recommended release”. I also won’t produce updated screencasts and docs. But the setup is very similar to earlier versions. Feel free to use and customize 1.5 and give me feedback. On the good news front, I am working on a new version – welcome thinktecture IdentityServer. This version is based on MVC3 and the routing architecture, removes a lot of the clutter, has a SQL CE4-based configuration system, is more extensible – and is overall just cleaner. I will be able to upload CTPs very soon.

    Read the article

  • Double VPN Network Authentication

    - by Pyromanci
    I have a project I'm working on and am looking for some info. Right now I have a VPN network using Cisco PIX 501s for the VPN clients and a Cisco VPN Concentrator 3000 for the VPN server. Since the PIX is constantly connected to the VPN, I want to add an extra level of authentication, meaning that when the user on the other end goes to access anything on the VPN they are asked for a username and password before the connection is established. I've never done this sort of structure before, so I'm not even sure where to begin, or even whether my current hardware can do something like this, or if I need to throw some sort of RADIUS/LDAP/Active Directory server into the mix.

    Read the article

  • NRF Big Show 2011 -- Part 1

    - by David Dorf
    When Apple decided to open retail stores, they came to 360Commerce (now part of Oracle Retail) to help with the secret project. Similarly, when Disney Stores decided to reinvent itself, they also came to us for their POS system. In both cases visiting a store is an experience where sales take a backseat to entertainment, exploration, and engagement. This quote from a recent Stores Magazine article says it all: "We compete based on an experience, emotion and immersion like Disney," says Neal Lassila, vice president of global information technology for Disney. "That's opposed to [competing] on price and hawking a doll for $19.99. There is no sales pressure technique." Instead, it's about delivering "a great time." While you're attending the NRF conference in New York next week, you'll definitely want to stop by the new 20,000 square-foot Disney store in Times Square. If you're not attending, you can always check out the videos to get a feel for the stores' vibe. This year we've invited Disney Stores to open a pop-up store within the Oracle Retail booth. There will be lots of items on sale that fit in your suitcase, and there's no better way to demonstrate our POS, including the mobile POS running on an iPod Touch. You should also plan to attend Tuesday morning's super-session The Magic of the Disney Store: An Immersive Retail Experience with Steve Finney. In the case of Apple and Disney, less POS is actually a good thing. In both cases it was important to make the checkout process fast and easy so as not to detract from the overall experience. There will be ample opportunities to see this play out in New York next week, so I hope you take advantage.

    Read the article

  • Activate Your Monitor via Motion Trigger

    - by ETC
    Most people are in the habit of jiggling their mouse or tapping their keyboard when they want to wake their monitor. This clever electronics hack adds a sensor to your computer for motion-based monitor activation. At the DIY and electronics blog Radio Etcetera they tackled an interesting project and shared the build guide. Their local volunteer fire department needed a monitor on for quick information checks but they didn’t need it on all the time and they didn’t want to have to walk over and activate the monitor when they needed it. The solution involved hacking a simple infrared security sensor and wiring it via USB to send a mouse command when motion is detected in the room. Fire fighter walks in, monitor turns on and displays information; fire fighter leaves and the monitor goes back to sleep. Hit up the link below to see additional photos, schematics, and the complete build guide. Motion Activated PC Monitor [Radio Etcetera via Hack A Day]

    Read the article

  • Authentication system brainstorm

    - by gansbrest
    Hi. We have multiple small websites (microsites) and one main high-traffic one with a big user base. Right now the requirement is to build an authentication system which should allow users to log in with the same identity across the network. All websites are running on different domains, powered by the Drupal 6 CMS, and have separate databases (so sharing tables with a prefix is not an option, plus it creates a huge mess in the db). Here is the set of core requirements I came up with: 1. Users should be able to log in with the same credentials to all sites within the network. 2. User data is shared between the main site (storage) and all microsites within the network. 3. Data is synchronized across the network when a user changes their data (updates email or password, for example). 4. The login/registration process should be seamless and consistent. 5. Users can register on any of the sites across the network and use that identity to log in later on. In the future there might be a need to add OpenID authentication options. Basically we are looking at something similar to what Stack Exchange does, but I'm not sure if they have a central user base or not. I was thinking about a custom solution which will include two parts (modules): one will live on the main site, storing user data and responding to requests from clients; the second part (module) will be placed on each microsite and will send requests to the master. Some kind of client-server setup. One of the complications I see right away is #3, data synchronization across the network. I just don't want to reinvent the wheel, and maybe some work is already done in this direction. Looking forward to your ideas on how to approach this project. EDIT: We use a MySQL database
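
    As a rough sketch of the client/master split described above (shown in C# only for concreteness; the real modules would be Drupal/PHP, and the endpoint, field names and response shape are invented, not an existing API):

      using System;
      using System.Collections.Generic;
      using System.Net.Http;
      using System.Threading.Tasks;

      class MicrositeAuthClient
      {
          static readonly HttpClient Http = new HttpClient();

          // Client part on a microsite: forward the login attempt to the master site.
          static async Task<bool> LoginAsync(string user, string password)
          {
              var form = new FormUrlEncodedContent(new Dictionary<string, string>
              {
                  ["user"] = user,
                  ["password"] = password
              });
              HttpResponseMessage response =
                  await Http.PostAsync("https://main.example.com/auth/verify", form);

              // On success the master returns the user's profile, which the microsite
              // caches locally; profile updates are pushed back the same way (req. #3).
              return response.IsSuccessStatusCode;
          }

          static async Task Main()
          {
              Console.WriteLine(await LoginAsync("alice", "secret") ? "logged in" : "rejected");
          }
      }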

    Read the article

  • How should developers handle subpar working conditions? [closed]

    - by ivar
    I have been working in my current job for less than a year and at the beginning didn't have the courage to say anything about the things that bothered me. Now I'm a bit fed up and need things to get better. The first problem is not random, but I'll mention it anyway: we are running out of space, so every new employee gets a smaller table. We are promised that the space problem will be fixed soon. Almost every employee has a different keyboard, mouse and headphones (if any). Mine are a $10 keyboard, some random cheap mouse and some random crappy headphones with a mic. All of these were used and dirty when I got them. The number of monitors is one to three, in different sizes. I have two nice monitors and can't complain, but some people are given one small monitor. When it's their first job they don't have the guts to ask for two, even if most others have two. Nobody seems to care either. The project manager asked one of them if it was OK; he obviously said he could handle the one small monitor. Then the manager said he could go and ask for one more. I'm watching this and thinking: go and ask where? The company is trying to hire more people but is not doing much after the person has signed the contract. We are put in one room that is open to the hallway and it's super noisy. Almost like a zoo at times. Even if nobody is talking, the crappy keyboards make too much noise. Is this normal? Am I too negative and should I just do my job with what I was given? Should I demand better things? Should the company have some system so that everybody gets equipment in some price range?

    Read the article

  • Why the recent shift to removing/omitting semicolons from Javascript?

    - by Jonathan
    It seems to be fashionable recently to omit semicolons from Javascript. There was a blog post a few years ago emphasising that in Javascript, semicolons are optional and the gist of the post seemed to be that you shouldn't bother with them because they're unnecessary. The post, widely cited, doesn't give any compelling reasons not to use them, just that leaving them out has few side-effects. Even GitHub has jumped on the no-semicolon bandwagon, requiring their omission in any internally-developed code, and a recent commit to the zepto.js project by its maintainer has removed all semicolons from the codebase. His chief justifications were: it's a matter of preference for his team; less typing Are there other good reasons to leave them out? Frankly I can see no reason to omit them, and certainly no reason to go back over code to erase them. It also goes against (years of) recommended practice, which I don't really buy the "cargo cult" argument for. So, why all the recent semicolon-hate? Is there a shortage looming? Or is this just the latest Javascript fad?

    Read the article

  • Suitable SDK to develop quick game?

    - by gRnt
    I'm currently undertaking a personal project at home that I need to turn around inside the next few months (which, working full time and still learning programming, is a tad difficult). I'm looking for suggestions on SDKs or tools (preferably free, or that come with games, similar to Steam tools) that I can use to develop a "game". I'm OK with coding but have no 3D development skills at all. I've very little experience with mod tools or SDKs at all, but I'm hoping someone can point me in the direction of one that offers the following: a decent library of prefab 3D models to build scenes, and the ability to add scripting to the scene. I've used Unity before and would prefer to continue to do so; however, I really have the worst 3D skills imaginable and can't waste time learning them. I'd be looking for prefab items that are both industrial and possibly more lush environments (trees etc). If it makes any difference (due to licensing and what-not), I WILL NOT be selling this game or marketing it in any way, and I am a University student if any places do education licences. Another alternative would be to source free 3D models elsewhere, but again, while I'm still learning I have no idea where to look; if someone could point me in the right direction I'll do the rest of the digging. Thanks

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even if the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it’s a good idea to show the steps required. First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as: Provider=MSOLAP;Data Source=SERVERNAME\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012 Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn’t provide any particular help in writing the DAX query, but at least can show a preview of the result of the query execution. I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX Query Language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!

    Read the article

  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of hours possible for each task also, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically am just calculating whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.
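
    For illustration, here is one way such a roll-up can look, using the classic PERT formulas (expected = (best + 4*likely + worst) / 6, per-task standard deviation = (worst - best) / 6) rather than McConnell's exact tables; the task numbers are invented:

      using System;
      using System.Linq;

      class EstimateRollup
      {
          // (best, likely, worst) hours per task - invented numbers for illustration.
          static readonly (double Best, double Likely, double Worst)[] Tasks =
          {
              (1, 2, 4),
              (2, 3, 6),
              (0.5, 1, 3)
          };

          static void Main()
          {
              // Expected (roughly 50%-confidence) total and its standard deviation,
              // assuming task estimates are independent so the variances add.
              double expected = Tasks.Sum(t => (t.Best + 4 * t.Likely + t.Worst) / 6);
              double variance = Tasks.Sum(t => Math.Pow((t.Worst - t.Best) / 6, 2));
              double sd = Math.Sqrt(variance);

              Console.WriteLine($"50% estimate : {expected:F1} h");
              // Roughly 84% confidence: one standard deviation above the mean.
              Console.WriteLine($"~84% estimate: {expected + sd:F1} h");
          }
      }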

    Read the article

  • Inheriting projects - General Rules?

    - by pspahn
    This is an area of discussion I have long been curious about, but overall, I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason, they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There are a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

    Read the article
