Search Results

Search found 30309 results on 1213 pages for 'object relationships'.


  • Build-time dependency resolving coming to Entity Framework. Now, how about those BI tools too?

    - by jamiet
    Three months ago I wrote a blog post entitled Some thoughts on Visual Studio database references and how they should be used for SQL Server BI, where I shared some thoughts on a feature available to database developers in Visual Studio 2010 that I would love to see added to SQL Server Integration Services (SSIS), Analysis Services (SSAS) and Reporting Services (SSRS). In there I said:

    Over the past few weeks I have been making heavy use of the Database tools in Visual Studio 2010 and one of the features that has most impressed me has been database references. Database references allow you to have stored procedures in your database project that refer to objects (tables, views, stored procedures etc…) that exist in other database projects, and hence when you build your database project it is able to resolve those references. It occurred to me that similar functionality would be incredibly useful for SQL Server Integration Services (SSIS), Analysis Services (SSAS) & Reporting Services (SSRS) projects. After all, reports, packages and data source views are rife with references to database objects – why shouldn’t we be able to have design-time dependency checking in our BI projects the same way that database and .Net developers do?

    In that blog post I shared links to three Connect submissions where I requested this feature be added to SSIS, SSAS & SSRS. In addition I also submitted a request that the feature be extended to .Net projects so that any reference to a database object in a .Net assembly can be resolved at build time. That Connect submission is at [Entity FX] Use database references to constrain the EDM, and overnight it received this comment from Microsoft: "We have been working on this feature for a while and it will be available soon." This is really good news - it improves the Microsoft developer ecosystem by ensuring invalid database references get caught at build time (ideally as part of a Continuous Integration build) rather than at run time. [Hopefully it might nip this code-first nonsense in the bud too (Ooo...way to incite flame comments :) ).] If you want to see this feature in action then check out a video from TechEd Europe last month entitled SQL Server Developer Tools Code-named "Juneau" where it is demo'd by Lance Delano and Tim Laverty.

    The point of this blog post though is not just to draw attention to this forthcoming feature for .Net developers; it is to ask you to petition Microsoft to get this feature added to SSIS/SSAS/SSRS too. After all, we already know (from the video above) that the feature is coming to this new code-named Juneau development environment, plus we also know that Juneau will be the development environment for SSIS/SSAS/SSRS as well - is it really much of a stretch to expect the BI tools to have access to this great feature too? I don't think so, and if you agree with me then I urge you to vote and add a comment to the Connect submissions that are requesting this feature. They are at:

    [SSAS] Declare Object Dependancies
    [SSRS] Declare Object Dependancies
    [SSIS] Declare Object Dependancies

    (Update: apparently someone at Microsoft has deemed it necessary to set this last one to private and I am not able to change it back even though I submitted it. You can still vote on the other two though.)

    Let's close that SQL Developer Gap!

    @Jamiet

    Read the article

  • How can an SQL relational database be used to model a thesaurus? [closed]

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. This thesaurus data model can be defined as: a controlled vocabulary arranged in a known order in which equivalence, hierarchical, and associative relationships among terms are clearly displayed and identified by standardized relationship indicators. My idea so far is to have one database in which every word is a table, and every table contains all the words related to that word, e.g. Thesaurus(database) - happy(table) - excited(row)|cheerful(row)|lively(row). Is there a more efficient way to store words and their relationships to other words in a relational SQL database?
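    A more conventional relational answer than one-table-per-word is a single table of terms plus a self-referencing relationship table, with the relationship kind (equivalence, hierarchical, associative) stored as data. A minimal sketch of that design, expressed here as hypothetical EF code-first classes (all names invented for illustration):

        public class Term
        {
            public int Id { get; set; }
            public string Word { get; set; }
        }

        public enum RelationKind { Equivalence, Hierarchical, Associative }

        public class TermRelation
        {
            public int Id { get; set; }
            public int FromTermId { get; set; }   // FK to Term
            public int ToTermId { get; set; }     // FK to Term
            public RelationKind Kind { get; set; }
        }

    Finding the synonyms of "happy" then becomes a join against TermRelation rather than a lookup of a per-word table, and adding a word never changes the schema.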

    Read the article

  • Introdução ao NHibernate on TechDays 2010

    - by Ricardo Peres
    I’ve been working on the agenda for my presentation titled Introdução ao NHibernate that I’ll be giving at TechDays 2010, and I would like to request your assistance. If you have any subject that you’d like me to talk about, you can suggest it to me. For now, I’m thinking of the following topics:

    Domain Driven Design with NHibernate
    Inheritance Mapping Strategies (Table Per Class Hierarchy, Table Per Type, Table Per Concrete Type, Mixed)
    Mappings (hbm.xml, NHibernate Attributes, Fluent NHibernate, ConfORM)
    Supported querying types (ID, HQL, LINQ, Criteria API, QueryOver, SQL)
    Entity Relationships
    Custom Types
    Caching
    Interceptors and Listeners
    Advanced Usage (Duck Typing, EntityMode Map, …)
    Other projects (NHibernate Validator, NHibernate Search, NHibernate Shards, …)
    ASP.NET Integration
    ASP.NET Dynamic Data Integration
    WCF Data Services Integration

    Comments?

    Read the article

  • BizTalk Server 2010 Beta available

    - by Rajesh Charagandla
    BizTalk Server 2010 Beta - Click Here to Download. Overview: BizTalk Server 2010 offers significant enhancements to help integrate heterogeneous line-of-business systems with Windows .NET and SharePoint based applications to optimize user productivity, gain business efficiency and increase agility. BizTalk Server 2010 allows .NET developers to take advantage of BizTalk services right out of the box to rapidly build solutions that need to integrate transactions and data from applications like SAP, mainframes, MS Dynamics and Oracle. Similarly, SharePoint developers can seamlessly use BizTalk services directly through the new Business Connectivity Services in SharePoint 2010. BizTalk Server 2010 includes a new data mapping & transformation tool to dramatically reduce the development time needed to mediate data exchange between disparate systems. It also provides a new single dashboard to manage performance parameters and streamline deployments from development to test to production. BizTalk 2010 includes a new, scalable Trading Partner Management (TPM) model with a graphical interface for flexible management of business partner relationships and an efficient on-boarding process.

    Read the article

  • What are the roles of a Software Delivery Manager

    - by Rich
    I have been told about a position that may be open to me - the role of a Software Delivery Manager. From what I understand this role does not already exist within my organisation. To be perfectly honest I'm not quite sure what a Software Delivery Manager's responsibilities are. I have a few ideas and would appreciate some input around whether they are correct or not, or if there is anything missing:

    ensure the quality of the software being delivered
    document the relationships between the components being delivered
    ensure that the delivery of these components does not break other components
    ensure that the components being developed make the best use of the environments they are being deployed in
    be on hand during software deliveries (though not actually performing the delivery of software, rather giving the go)

    I have also been told that the role would include some software development work (which is important to me, being a developer at heart!) - is there software development specifically associated with the role of Software Delivery Manager, or is this more likely to just be a case of helping the team out when time is short?

    Read the article

  • Improving Click and Drag with C++

    - by Josh
    I'm currently using SFML 2.0 to develop a game in C++. I have a game sprite class that has a click and drag method. The method works, but there is a slight problem. If the mouse moves too fast, the object the user selected can't keep up and is left behind in the spot where the mouse left its bounds. I will share the class definition and the given function implementation.

    Definition:

        class codePeg
        {
        protected:
            FloatRect bounds;
            CircleShape circle;
            int xPos, yPos, xDiff, yDiff, once;
            int xBase, yBase;
            Vector2i mousePos;
            Vector2f circlePos;
        public:
            void init(RenderWindow& Window);
            void draw(RenderWindow& Window);
            void drag(RenderWindow& Window);
            void setPegPosition(int x, int y);
            void setPegColor(Color pegColor);
            void mouseOver(RenderWindow& Window);
            friend int isPegSelected(void);
        };

    Implementation of the "drag" function:

        void codePeg::drag(RenderWindow& Window)
        {
            mousePos = Mouse::getPosition(Window);
            circlePos = circle.getPosition();
            if(Mouse::isButtonPressed(Mouse::Left))
            {
                if(mousePos.x > xPos && mousePos.y > yPos
                   && mousePos.x - bounds.width < xPos
                   && mousePos.y - bounds.height < yPos)
                {
                    if(once)
                    {
                        xDiff = mousePos.x - circlePos.x;
                        yDiff = mousePos.y - circlePos.y;
                        once = 0;
                    }
                    xPos = mousePos.x - xDiff;
                    yPos = mousePos.y - yDiff;
                    circle.setPosition(xPos, yPos);
                }
            }
            else
            {
                once = 1;
                xPos = xBase;
                yPos = yBase;
                xDiff = 0;
                yDiff = 0;
                circle.setPosition(xBase, yBase);
            }
            Window.draw(circle);
        }

    Like I said, the function works, but to me the code is very ugly and I think it could be improved and made more efficient. The only thing I can think of as to why the object cannot keep up with the mouse is that there are too many function calls and/or checks. The user does not really have to move the mouse "fast" for it to happen; I would say at an average pace the object is left behind. How can I improve the code so that the object remains with the mouse when it is selected? Any help improving this code or giving advice is greatly appreciated.

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • 9/13 Live Webcast!!! Drive Innovation from Big Data Don't delay - register now!

    - by jgelhaus
    Big data solutions can help you find new insights, capitalize on hidden relationships, and deliver new value to your business. But to derive real business value from big data, you need the right tools and the right strategy. Join the live 9/13 Webcast to get an inside look at the benefits of big data and how you can realize them in your own IT infrastructure. We’ll discuss:

    The defining characteristics of big data
    Various big data use cases and examples
    Requirements for new skills and software
    Highlights of the Oracle big data platform

    Register now for the live Webcast on 9/13! It's your chance to talk with the Big Data gurus and discover solutions to data challenges that have eluded your data center—until now.

    Read the article

  • How to define template directives (from an API perspective)?

    - by Ralph
    Preface: I'm writing a template language (don't bother trying to talk me out of it), and in it there are two kinds of user-extensible nodes: TemplateTags and TemplateDirectives. A TemplateTag closely relates to an HTML tag -- it might look something like

        div(class="green") { "content" }

    and it'll be rendered as

        <div class="green">content</div>

    i.e., it takes a bunch of attributes, plus some content, and spits out some HTML. TemplateDirectives are a little more complicated. They can be things like for loops, ifs, includes, and other such things. They look a lot like a TemplateTag, but they need to be processed differently. For example,

        @for($i in $items) {
            div(class="green") { $i }
        }

    would loop over $items and output the content with the variable $i substituted in each time. So.... I'm trying to decide on a way to define these directives now.

    Template Tags: The TemplateTags are pretty easy to write. They look something like this:

        [TemplateTag]
        static string div(string content = null, object attrs = null)
        {
            return HtmlTag("div", content, attrs);
        }

    where content gets the stuff between the curly braces (pre-rendered if there are variables in it and such), and attrs is either a Dictionary<string,object> of attributes, or an anonymous type used like a dictionary. It just returns the HTML which gets plunked into its place. Simple! You can write tags in basically one line.

    Template Directives: The way I've defined them now looks like this:

        [TemplateDirective]
        static string @for(string @params, string content)
        {
            var tokens = Regex.Split(@params, @"\sin\s").Select(s => s.Trim()).ToArray();
            string itemName = tokens[0].Substring(1);
            string enumName = tokens[1].Substring(1);
            var enumerable = data[enumName] as IEnumerable;
            var sb = new StringBuilder();
            var template = new Template(content);
            foreach (var item in enumerable)
            {
                var templateVars = new Dictionary<string, object>(data) { { itemName, item } };
                sb.Append(template.Render(templateVars));
            }
            return sb.ToString();
        }

    (Working example). Basically, the stuff between the ( and ) is not split into arguments automatically (like the template tags do), and the content isn't pre-rendered either. The reason it isn't pre-rendered is because you might want to add or remove some template variables or something first. In this case, we add the $i variable to the template variables,

        var templateVars = new Dictionary<string, object>(data) { { itemName, item } };

    and then render the content manually,

        sb.Append(template.Render(templateVars));

    Question: I'm wondering if this is the best approach to defining custom Template Directives. I want to make it as easy as possible. What if the user doesn't know how to render templates, or doesn't know that he's supposed to? Maybe I should pass in a Template instance pre-filled with the content instead? Or maybe only let him tamper with the template variables, and then automatically render the content at the end? OTOH, for things like "if", if the condition fails then the template wouldn't need to be rendered at all. So there's a lot of flexibility I need to allow in here. Thoughts?
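    One hedged sketch of the alternative the question raises - handing the directive a pre-built Template plus a mutable variable dictionary, so the author never constructs a Template by hand - might look like this (the type names mirror the ones in the question; the signature itself is an invented possibility, not part of the actual language):

        [TemplateDirective]
        static string @for(string @params, Template content, IDictionary<string, object> vars)
        {
            var tokens = Regex.Split(@params, @"\sin\s").Select(s => s.Trim()).ToArray();
            string itemName = tokens[0].Substring(1);
            var items = vars[tokens[1].Substring(1)] as IEnumerable;

            var sb = new StringBuilder();
            foreach (var item in items)
            {
                vars[itemName] = item;             // expose the loop variable
                sb.Append(content.Render(vars));   // Template was built by the engine
            }
            vars.Remove(itemName);
            return sb.ToString();
        }

    Because the directive still decides when (and whether) Render is called, lazy cases like a failing @if stay cheap, while the engine keeps control of Template construction.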

    Read the article

  • New Paper on the PeopleSoft Interaction Hub-PeopleTools Relationship

    - by Matthew Haavisto
    A new paper has just been published that explains the relationships and dependencies between the PeopleSoft Interaction Hub (formerly the PeopleSoft Applications Portal) and PeopleTools. This paper will help you understand which versions of the Hub work with which versions of Tools. The paper contains information on how new customers can install the PeopleSoft Interaction Hub, and on how existing PeopleSoft Interaction Hub customers can apply PIH 9.1 Feature Pack 1 functionality if they are on an earlier version. It also describes how PeopleSoft Interaction Hub releases are aligned with PeopleTools releases, the general upgrade process within the Feature Pack model, and how customers can expect this to work with subsequent feature packs, maintenance packs, and bundles. You can get the paper from Oracle support.

    Read the article

  • Wiki based requirements engineering tool

    - by Shanon
    Hi, I'm looking to build a wiki-based tool that helps/aids in the requirements engineering process. More specifically, I am hoping to end up with a tool that helps inexperienced users easily create and design requirements documents on a wiki platform. I was wondering if there are any wikis/wiki platforms that already exist, or that are easily extensible, that would be worth looking at for this purpose. For instance, one of the features I was hoping to add would be structure for a document, so that information is filled out in a standardised manner. Another idea I was looking at was to somehow create relationships between different types of documents (for example, a goal diagram evolves into / helps in the development of the class diagram). So far I have come across FOSwiki, which claims to be fully customisable... but I'm not sure what that means and what I can really do with it. Any input on FOSwiki is also highly appreciated.

    Read the article

  • TechEd North America 2012–Day 2 #msTechEd #teched

    - by Marco Russo (SQLBI)
    This is the second day at TechEd North America 2012, and yesterday I had many conversations about PowerPivot and SSAS Tabular. In the evening the book signing at the O'Reilly booth was a big success! I'm writing this post from the speakers' room. It's not crowded this morning because the keynote is going on, and there is nobody in the hall either; everyone is in the keynote room. Today will be a very busy day: I'll be staffing the Technical Learning Center from 12:30pm to 3:30pm, so this is a first chance to join the conversation about Tabular and DAX. There is another chance this evening at Community Night, starting at 6:30pm until 9:00pm. Join us at this Ask the Expert event! And, well, don't miss the Many-to-Many Relationships in BISM Tabular session from Alberto this afternoon at 5:00 pm in room S330E. Look at yesterday's post if you want to see our full schedule for the week. Enjoy TechEd!

    Read the article

  • How to model interentity membership in entity-component architecture?

    - by croxis
    I'm falling in love with the simple grace of entity-component design, although I still have issues breaking from MVC and OOP practices. Some of my game entities have membership relationships with each other (e.g., a player is a member of a city, a city is a member of a nation), and I am unsure of the best way to implement this. My initial reaction is to have a MemberOfCity component that points to the appropriate city component, but components are supposed to have no references to each other. My other option is to have a System do it, but that would require the system to persist data outside of a component. Is there a clean way to do this in an entity-component design, or am I trying to use a hammer on a screw and should I use a hybrid/another approach?
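    A common way out (not from the question, and hedged as one idiom among several): let the component store a plain entity id rather than an object reference, so components stay pure data and only systems resolve ids. A minimal sketch with invented names:

        // Components are plain data; the "reference" is just an entity id.
        public struct MemberOfCity
        {
            public int CityEntityId;   // id of the city entity, not a pointer to its components
        }

        public struct City
        {
            public string Name;
        }

    A membership system can then look the city entity up by id whenever it needs it, which keeps the relationship in component data without coupling components to each other.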

    Read the article

  • Creating a SOLID Visual Studio Solution

    The SOLID acronym describes five object-oriented design principles that, when followed, produce code that is cleaner and more maintainable. The last principle, the Dependency Inversion Principle, suggests that details depend upon abstractions. Unfortunately, typical project relationships in .NET applications can make this principle difficult to follow. In this article, I'll describe how one can structure a set of projects in a Visual Studio solution such that DIP can be followed, allowing for the creation of a SOLID solution. You can download the sample solution and use it as a starting point for your new solutions if you like.
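    A minimal illustration of the inversion in question (names invented for the example, not taken from the article's sample solution): the high-level project owns the abstraction, and the low-level project references it to supply the detail.

        // Core project: high-level policy, owns the abstraction.
        public class Order { public int Id { get; set; } }

        public interface IOrderRepository
        {
            void Save(Order order);
        }

        // Infrastructure project: references Core, implements the detail.
        public class SqlOrderRepository : IOrderRepository
        {
            public void Save(Order order)
            {
                // database access code lives here, behind the abstraction
            }
        }

    Because the project reference points from Infrastructure to Core rather than the other way around, the details now depend upon the abstraction, as DIP asks.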

    Read the article

  • Is HR The New IT?

    - by Scott Ewart
    As recruitment, on-boarding and development head to the cloud and mobile devices put sophisticated tools into everyone’s hands, HR leaders are discovering that technology savvy and analytical skills are key to effective talent management. In this article by Ladan Nikravan in the September edition of Talent Management magazine, Oracle's own Chris Leone, SVP of Fusion Strategy, gives his take on how technology trends such as social, mobile, big data and the cloud are creating a fundamental change in how employees and HR create value and relationships within the networked organization. Read the full article here: http://d27vj430nutdmd.cloudfront.net/23555/122778/122778.1.pdf

    Read the article

  • how to use ajax with json in ruby on rails

    - by rafik860
    I am implementing a Facebook application in Rails using the Facebooker plugin, therefore it is very important to use this architecture if I want to update multiple DOM elements in my page; if my code works in a regular Rails application it will work in my Facebook application. I am trying to use Ajax to let the user know that the comment was sent, and to update the comments block.

    Migration:

        class CreateComments < ActiveRecord::Migration
          def self.up
            create_table :comments do |t|
              t.string :body
              t.timestamps
            end
          end

          def self.down
            drop_table :comments
          end
        end

    Controller:

        class CommentsController < ApplicationController
          def index
            @comments = Comment.all
          end

          def create
            @comment = Comment.create(params[:comment])
            if request.xhr?
              @comments = Comment.all
              render :json => { :ids_to_update => [:all_comments, :form_message],
                                :all_comments => render_to_string(:partial => "comments"),
                                :form_message => "Your comment has been added." }
            else
              redirect_to comments_url
            end
          end
        end

    View:

        <script>
        function update_count(str, message_id) {
          len = str.length;
          if (len < 200) {
            $(message_id).innerHTML = "<span style='color: green'>" + (200 - len) + " remaining</span>";
          } else {
            $(message_id).innerHTML = "<span style='color: red'>" + "Comment too long. Only 200 characters allowed.</span>";
          }
        }

        function update_multiple(json) {
          for (var i = 0; i < json["ids_to_update"].length; i++) {
            id = json["ids_to_update"][i];
            $(id).innerHTML = json[id];
          }
        }
        </script>

        <div id="all_comments">
          <%= render :partial => "comments/comments" %>
        </div>
        Talk some trash: <br />
        <% remote_form_for Comment.new, :url => comments_url, :success => "update_multiple(request)" do |f| %>
          <%= f.text_area :body, :onchange => "update_count(this.getValue(),'remaining');",
                                 :onkeyup => "update_count(this.getValue(),'remaining');" %>
          <br />
          <%= f.submit 'Post' %>
        <% end %>
        <p id="remaining">&nbsp;</p>
        <p id="form_message">&nbsp;</p>

    If I call alert(json) in the first line of the update_multiple function, I get [object Object]. If I call alert(json["ids_to_update"][0]) in the first line of the update_multiple function, no dialog box is displayed. The comment gets saved, but nothing is updated. It seems like the object sent by Rails is nil, or can't be parsed by JSON.parse(json). Questions:

    1. How can JavaScript and Rails know that I am dealing with JSON objects? Does Rails send it in an object format or a text format? How can I check that the JSON object has been sent?
    2. How can I see what the returned JSON is? Do I have to parse it? How?
    3. How can I debug this problem?
    4. How can I get it to work?

    Read the article

  • Library order is important

    - by Darryl Gove
    I've written quite extensively about link ordering issues, but I've not discussed the interaction between archive libraries and shared libraries. So let's take a simple program that calls a maths library function:

        #include <math.h>
        int main() {
          for (int i=0; i<10000000; i++) {
            sin(i);
          }
        }

    We compile and run it to get the following performance:

        bash-3.2$ cc -g -O fp.c -lm
        bash-3.2$ timex ./a.out
        real 6.06
        user 6.04
        sys 0.01

    Now most people will have heard of the optimised maths library which is added by the flag -xlibmopt. This contains optimised versions of key mathematical functions; in this instance, using the library doubles performance:

        bash-3.2$ cc -g -O -xlibmopt fp.c -lm
        bash-3.2$ timex ./a.out
        real 2.70
        user 2.69
        sys 0.00

    The optimised maths library is provided as an archive library (libmopt.a), and the driver adds it to the link line just before the maths library - this causes the linker to pick the definitions provided by the static library in preference to those provided by libm. We can see the processing by asking the compiler to print out the link line:

        bash-3.2$ cc -### -g -O -xlibmopt fp.c -lm
        /usr/ccs/bin/ld ... fp.o -lmopt -lm -o a.out...

    The flag to the linker is -lmopt, and this is placed before the -lm flag. So what happens when the -lm flag is in the wrong place on the command line:

        bash-3.2$ cc -g -O -xlibmopt -lm fp.c
        bash-3.2$ timex ./a.out
        real 6.02
        user 6.01
        sys 0.01

    If the -lm flag is before the source file (or object file for that matter), we get the slower performance from the system maths library. Why's that? If we look at the link line we can see the following ordering:

        /usr/ccs/bin/ld ... -lmopt -lm fp.o -o a.out

    So the optimised maths library is still placed before the system maths library, but the object file is placed afterwards. This would be ok if the optimised maths library were a shared library, but it is not - instead it's an archive library, and archive library processing is different - as described in the linker and library guide: "The link-editor searches an archive only to resolve undefined or tentative external references that have previously been encountered."

    An archive library can only be used to resolve symbols that are outstanding at that point in the link processing. When fp.o is placed before the libmopt.a archive library, the linker has an unresolved symbol referenced in fp.o, and it will search the archive library to resolve that symbol. If the archive library is placed before fp.o then there are no unresolved symbols at that point, and so the linker doesn't need to use the archive library. This is why libmopt needs to be placed after the object files on the link line.

    On the other hand, if the linker has observed any shared libraries, then at any point these are checked for any unresolved symbols. The consequence of this is that once the linker "sees" libm it will resolve any symbols it can to that library, and it will not check the archive library to resolve them. This is why libmopt needs to be placed before libm on the link line. This leads to the following order for placing files on the link line:

        Object files
        Archive libraries
        Shared libraries

    If you use this order, then things will consistently get resolved to the archive libraries rather than to the shared libraries.

    Read the article

  • Hello NHibernate! Quickstart with NHibernate (Part 1)

    - by BobPalmer
    When I first learned NHibernate, I could best describe the experience as less of a learning curve and more like a learning cliff.  A large part of that was the availability of tutorials.  In this first of a series of articles, I will be taking a crack at providing people new to NHibernate the information they need to quickly ramp up with NHibernate. For the first article, I've decided to address the gap of just giving folks enough code to get started.  No UI, no fluff - just enough to connect to a database and do some basic CRUD operations.  In future articles, I will discuss a repository pattern for NHibernate, parent-child relationships, and other more advanced topics. You can find the entire article via this Google Docs link: http://docs.google.com/Doc?docid=0AUP-rKyyUMKhZGczejdxeHZfOGMydHNqdGc0&hl=en Enjoy! -Bob
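    As a taste of the "just enough code" the article promises, here is a minimal sketch of basic NHibernate CRUD, assuming an hibernate.cfg.xml and an already-mapped Customer class exist (the entity name is invented; the full article walks through the real setup):

        using NHibernate;
        using NHibernate.Cfg;

        public class QuickStart
        {
            public static void Run()
            {
                // Build the session factory once per application (reads hibernate.cfg.xml).
                ISessionFactory sessionFactory = new Configuration().Configure().BuildSessionFactory();

                using (ISession session = sessionFactory.OpenSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    // Customer is an assumed, already-mapped entity.
                    var customer = new Customer { Name = "Ada" };
                    session.Save(customer);                             // create
                    var loaded = session.Get<Customer>(customer.Id);    // read
                    loaded.Name = "Ada Lovelace";                       // update (flushed on commit)
                    session.Delete(loaded);                             // delete
                    tx.Commit();
                }
            }
        }

    Everything else - mappings, configuration, and a repository pattern - is what the linked article and its sequels cover.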

    Read the article

  • The case against INFORMATION_SCHEMA views

    - by AaronBertrand
    In SQL Server 2000, INFORMATION_SCHEMA was the way I derived all of my metadata information - table names, procedure names, column names and data types, relationships... the list goes on and on. I used the system tables like sysindexes from time to time, but I tried to stay away from them when I could. In SQL Server 2005, this all changed with the introduction of catalog views. For one thing, they're a lot easier to type. sys.tables vs. INFORMATION_SCHEMA.TABLES? Come on; no contest there - even...(read more)

    Read the article

  • [EF + ORACLE] Updating and Deleting Entities

    - by JTorrecilla
    Prologue: In previous chapters we have seen how to insert data through EF, with and without sequences. In this one, we are going to see how to update and delete data in the DB.

    Updating data: Updating an Entity's data (its properties) is a very common and easy action. Before changing any of the properties of the Entity, we can check the EntityState property and see that it is EntityState.Unchanged. To make an update, we first need to get the Entity which will be modified. In the following example, I use GetEmployeeByNumber to get a valid Entity:

        EMPLEADOS emp = GetEmployeeByNumber(2);
        emp.Name = "a";
        emp.Phone = "2";
        emp.Mail = "aa";

    After modifying the desired properties of the Entity, if we check the EntityState property again, it now has the EntityState.Modified value. To persist the changes to the DB it is necessary to invoke the SaveChanges function of our context:

        context.SaveChanges();

    If we check the EntityState property once more, we will see that the value is back to EntityState.Unchanged.

    Deleting data: Another easy action is to delete an Entity. The first step to delete an Entity from the DB is to select it:

        CLIENTS selectedClient = GetClientByNumber(15);
        context.CLIENTES.DeleteObject(selectedClient);

    Before invoking the DeleteObject function, we can check EntityState, whose value must be EntityState.Unchanged. After deleting the object, the state is changed to EntityState.Deleted. To commit the action we have to invoke the SaveChanges function. After that, the EntityState property will be EntityState.Detached.

    Cascade: Entity Framework allows cascade updates and deletes, although I have never seen cascade updates. What is a cascade delete? A cascade delete is an action that deletes all the objects related to the object we want to delete. This option can be established in the DB manager, or in the EF model designer. For example: given a (1-N) relation between clients and requests, the common rule would be to only allow deleting those clients who have no requests. If we select the relation between both entities and press the second mouse button, we can see the properties panel of the relation. Its grid shows the relation, indicating the master table (Clients) and the end point (Cabecera, or Requests). The property "End 1 OnDelete" indicates the action to take when an Entity from the master is deleted. There are two options:

    None: no action is taken; that is to say, if an Entity has detail entities it cannot be deleted.
    Cascade: all entities related to the master Entity are deleted.

    If we enable cascade delete on a relation and invoke the DeleteObject function of the set, we can observe that all the related detail entities show the EntityState.Deleted state. As with an update, insert or plain delete, the data will not be committed until we commit the changes with the SaveChanges function.
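    As a small illustration (a sketch reusing the chapter's entity names; this code is not from the original text), the cascade behaviour can be observed through the ObjectStateManager before committing:

        CLIENTS client = GetClientByNumber(15);
        context.CLIENTES.DeleteObject(client);

        // With cascade enabled, the dependent detail entities are marked too.
        // Count() is the System.Linq extension method.
        int pendingDeletes = context.ObjectStateManager
                                    .GetObjectStateEntries(EntityState.Deleted)
                                    .Count();

        context.SaveChanges();   // nothing is removed from the DB until this call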
    Finally: In this chapter we have seen how to update an Entity, how to delete an Entity, and how to implement cascade deletes through EF. In the next chapters we will see how to query the DB data.

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF

    After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name.

    In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework.

    Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there.

    The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver.
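    For readers who haven't seen the "code first" style, a hedged sketch of the kind of context class described above - overriding EF's conventions for legacy table names - might look like this (the entity and table names are invented stand-ins, not CoasterBuzz's actual schema):

        using System.Data.Entity;

        public class Park
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class Coaster
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual Park Park { get; set; }   // navigation property
        }

        public class CoasterContext : DbContext
        {
            public DbSet<Coaster> Coasters { get; set; }
            public DbSet<Park> Parks { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Map to legacy table names instead of EF's pluralized defaults.
                modelBuilder.Entity<Coaster>().ToTable("tblCoaster");
                modelBuilder.Entity<Park>().ToTable("tblPark");
            }
        }

    With a context like that, a one-line repository call such as context.Coasters.Include("Park") can hand the view the kind of rich object graph described above.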
    Not everything is easy, though. When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old-school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal.

    There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.

    Read the article

  • Meet @marcorus and @ferrarialberto at TechEd Europe 2012 #tee2012

    - by Marco Russo (SQLBI)
    Alberto and I are in Amsterdam this week at TechEd Europe 2012. If you are here at the conference, you can meet us here:

    Wed, Jun 27, 10:15 AM - 11:30 AM – Room G106 – DBI319 - BISM: Multidimensional vs. Tabular
    Wed, Jun 27, 02:15 PM – 02:30 PM – Microsoft Press Booth in the TechExpo area – PowerPivot for Excel 2010 Book Signing
    Thu, Jun 28, 8:30 AM - 9:45 AM – Room E107 – Many-to-Many Relationships in BISM Tabular
    Fri, Jun 29, 1:00 PM - 2:45 PM – Breakthrough Insight at Microsoft SQL Server Booth, TechExpo area – Staff and Q&A

    We’ll try to visit the Microsoft Booth very often, and we’ll be in the Breakthrough Insight area of the SQL Server zone (see the picture to identify it). And don’t miss the PowerPivot for Excel 2010 book signing event!

    Read the article

  • Map, Set use cases in a general web app

    - by user2541902
    I am currently working on my own Java web app (to be shown in interviews to get a Java job). I've not worked with Java in a professional environment, so I have had no guidance. I have a database, entity classes, and JPA relationships. The use cases are things like: a user has albums, an album has pics, a user has locations, a location has co-ordinates, etc. I used List (ArrayList) everywhere. I can do anything with a List and the DB: get some entry, find, etc. For example, I keep the list of users in a List, then use queries to get some entry (why would I keep them in a Map with id/email as the key?). I know the workings, features, and implementing classes of Map and Set very well, and I can use them for solving some algorithm, processing some data, etc. In interviews, I get asked whether I have worked with these, where I have used them, and so on. So, please tell me the cases where they should be used (with a DB, or any popular real use case).

    Read the article

  • Google I/O 2010 - The open & social web

    Google I/O 2010 - The open & social web - Social Web 101 - Chris Messina. This session will cover the latest and most important trends of the Social Web and dive deep into where this is all going, at both technical and conceptual levels. From the concepts of digital identity, relationships, and social objects, this session will cover emerging technologies like WebFinger, Salmon, ActivityStrea.ms, OpenID, OAuth and OpenSocial. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 47:12

    Read the article
