Search Results

Search found 12355 results on 495 pages for 'non nullable'.


  • imported function name is not visible in entities context

    - by ali moharrami
    Hi, I am working on a Silverlight application which uses EF. I am able to retrieve the data, but I want to execute a stored procedure which returns no value. I tried using a function import, and the function is created in DataModel.Designer.cs: public int ClearWorkflow(Nullable<global::System.Guid> processId) { ObjectParameter processIdParameter; if (processId.HasValue) { processIdParameter = new ObjectParameter("ProcessId", processId); } else { processIdParameter = new ObjectParameter("ProcessId", typeof(global::System.Guid)); } return base.ExecuteFunction("ClearWorkflow", processIdParameter); } But the function name is not visible on the entities context when accessed from Silverlight.
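
    A Silverlight client never sees the ObjectContext directly; it talks to the model through a service layer, so the imported function also has to be exposed there. A minimal sketch assuming WCF RIA Services; the domain service and context class names here are placeholders, not from the original post:

        // Expose the function import through a WCF RIA Services domain service.
        using System;
        using System.ServiceModel.DomainServices.EntityFramework;
        using System.ServiceModel.DomainServices.Hosting;
        using System.ServiceModel.DomainServices.Server;

        [EnableClientAccess]
        public class WorkflowDomainService : LinqToEntitiesDomainService<WorkflowEntities>
        {
            [Invoke] // generates a client-side proxy method callable from Silverlight
            public void ClearWorkflow(Guid? processId)
            {
                // delegates to the function import generated in DataModel.Designer.cs
                this.ObjectContext.ClearWorkflow(processId);
            }
        }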

    Read the article

  • Setting column length of a Long value with JPA annotations

    - by Gearóid
    Hi, I'm performing a little database optimisation at the moment and would like to set the column lengths in my table through JPA. So far I have no problem setting the String (varchar) lengths using JPA as follows: @Column(unique=true, nullable=false, length=99) public String getEmail() { return email; } However, when I want to do the same for a column which is of type Long (bigint), it doesn't work. For example, if I write: @Id @Column(length=7) @GeneratedValue(strategy = GenerationType.AUTO) public Long getId() { return id; } The column size is still set as the default of 20. Are we able to set these lengths in JPA or am I barking up the wrong tree? Thanks, Gearoid.
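
    For what it's worth, in JPA the length attribute applies only to string-valued columns, and precision/scale only to exact decimal columns, which is why a bigint ignores both. A hedged workaround, assuming Hibernate generates the DDL, is to spell out the column type yourself:

        // Sketch: override the generated SQL type; adjust "numeric(7)" to your database's syntax.
        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        @Column(columnDefinition = "numeric(7)")
        public Long getId() { return id; }

    Note that columnDefinition only affects schema generation; an existing table keeps its current type.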

    Read the article

  • How to add a column via a query which counts the total rows with a specific criteria in a table with circular relationship in MS ACCESS 2007

    - by Xaqron
    I have a simple table "Employees" with these fields: ID, ParentID, Name. ParentID is nullable, since an employee may have no manager. This table has a one-to-many relationship with itself: ID --one--to--many--> ParentID. Now I want a query which returns these columns: Name, plus a count of the rows whose ParentID equals the current row's ID (i.e. the rows this row is the manager of). Sample table:

        ID | ParentID | Name
        ----------------------
         1 |        0 | John
         2 |        1 | Bob
         3 |        1 | Alice
         4 |        3 | Jack

    This way I can find out how many other employees a given employee manages. The result should be something like this:

        Name  | Count of Employees
        --------------------------
        John  | 2
        Bob   | 0
        Alice | 1
        Jack  | 0

    How can I achieve this in MS ACCESS 2007? I have tried the built-in query builder without any success.
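
    One query shape that should work in Access 2007's SQL view is a self-join, using a LEFT JOIN so managers with no reports still appear with a count of 0 (a sketch using the table and column names above):

        SELECT m.Name, COUNT(e.ID) AS [Count of Employees]
        FROM Employees AS m
        LEFT JOIN Employees AS e ON e.ParentID = m.ID
        GROUP BY m.ID, m.Name;

    COUNT(e.ID) counts only non-null matches, so a manager with no reports nets 0 rather than 1.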

    Read the article

  • LINQ-to-SQL Query Timing Out

    - by kevinw
    I'm running this query in LINQ: var unalloc = db.slot_sp_getUnallocatedJobs("Repair", RadComboBox1.SelectedValue, 20); It runs when I first open the page, but when I go back to it and try to run the same query with a different value, "Con", being passed through, the LINQ to SQL designer.cs tells me that I've got a timeout error. Any ideas? Edit: This is what's in the designer: [Function(Name="dbo.slot_sp_getUnallocatedJobs")] public ISingleResult<slot_sp_getUnallocatedJobsResult> slot_sp_getUnallocatedJobs([Parameter(Name="JobType", DbType="VarChar(20)")] string jobType, [Parameter(Name="Contract", DbType="VarChar(10)")] string contract, [Parameter(Name="Num", DbType="Int")] System.Nullable<int> num) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), jobType, contract, num); return ((ISingleResult<slot_sp_getUnallocatedJobsResult>)(result.ReturnValue)); } This is the error: SqlException was unhandled by user code: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
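
    If the procedure is simply slow for some parameter values (parameter sniffing is a common culprit with stored procedures), a quick client-side mitigation is raising the command timeout on the DataContext before the call; a sketch, assuming db is the generated DataContext:

        // Default LINQ to SQL command timeout is 30 seconds; raise it before calling the proc.
        db.CommandTimeout = 120; // seconds
        var unalloc = db.slot_sp_getUnallocatedJobs("Con", RadComboBox1.SelectedValue, 20);

    This only masks the symptom; if "Con" consistently times out, the procedure's plan for that value is worth profiling on the server.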

    Read the article

  • versioning fails for onetomany collection holder

    - by Alexander Vasiljev
    Given the parent entity @Entity public class Expenditure implements Serializable { ... @OneToMany(mappedBy = "expenditure", cascade = CascadeType.ALL, orphanRemoval = true) @OrderBy() private List<ExpenditurePeriod> periods = new ArrayList<ExpenditurePeriod>(); @Version private Integer version = 0; ... } and the child one @Entity public class ExpenditurePeriod implements Serializable { ... @ManyToOne @JoinColumn(name="expenditure_id", nullable = false) private Expenditure expenditure; ... } While updating both parent and child in one transaction, org.hibernate.StaleObjectStateException is thrown: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect). Indeed, Hibernate issues two SQL updates: one changing parent properties and another changing child properties. Do you know a way to get rid of the parent update when changing the child? The update results in both inefficiency and a false positive for the optimistic lock. Note that both child and parent save their state to the DB correctly. The Hibernate version is 3.5.1-Final.
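
    One hedged option, if your Hibernate version supports it, is the @OptimisticLock annotation, which can exclude the collection from the parent's version check so child edits no longer bump (or conflict on) the parent's @Version; a sketch against the mapping above:

        import org.hibernate.annotations.OptimisticLock;

        @OneToMany(mappedBy = "expenditure", cascade = CascadeType.ALL, orphanRemoval = true)
        @OrderBy()
        @OptimisticLock(excluded = true) // collection changes no longer participate in versioning
        private List<ExpenditurePeriod> periods = new ArrayList<ExpenditurePeriod>();

    The trade-off is that a concurrent change to the children alone will no longer be detected by the parent's optimistic lock.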

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allow analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections.

    Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big data” challenges to be faced, and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence, covering hidden signs and patterns of criminal activity; the early or retrospective violation of regulations, laws, or corporate policies and procedures; emerging risks; and weakening controls? Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professionals’ day-to-day challenges.

    So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:

    Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.

    Machine-Generated/Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data.

    Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:

    Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more.

    Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.

    Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change.
    Without question, all GRC professionals work in a dynamic environment, and as new services, products, and business lines are added, or new marketing campaigns executed, new data types are needed to capture the resulting information.

    Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the number of customer complaints.

    Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures, and minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value.

    Big Challenges - Big Opportunities

    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012)

    As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed.

    GRC big data clearly allows the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal, as well as technical or operational, standpoint.
    However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key

    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out, and should be explored, are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise.

    Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data

    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward.

    Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

    Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data.

    Conduct a big data readiness and Information Architecture maturity assessment.

    Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology.

    Provide initial guidance on big data candidate selection for migrations or implementation.

    Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business.

    Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment.

    Conduct an executive workshop with recommendations and next steps.

    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • Hibernate sequence should only generate when ID is <=0

    - by Tim Leys
    Hi all, I'm using the following sequence in my code. (I got the sequence and @Id @SequenceGenerator(name = "S912_PRO_SEQ", sequenceName = "S912_PRO_SEQ", allocationSize = 1) @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "S912_PRO_SEQ") @Column(name = "PRO_ID", unique = true, nullable = false, precision = 9, scale = 0) public int getId() { return this.id; } And I am using the following sequence / trigger in my DB: CREATE SEQUENCE S912_PRO_SEQ nomaxvalue minvalue 20; CREATE OR REPLACE TRIGGER S912_PRO_B_I_TRG BEFORE INSERT ON S912_project REFERENCING OLD AS OLD NEW AS NEW FOR EACH ROW ENABLE begin IF :NEW.pro_ID IS NULL THEN select S912_PRO_SEQ.nextval into :new.pro_ID from dual; END IF; end; I was wondering if there is a way to let Hibernate generate a sequence value ONLY if the ID is <= 0 (not set) or if the ID already exists. I know for most cases my trigger would fix the situation, but I do not want to rely completely on it. I hope someone can help me out :p
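
    One hedged approach is a custom identifier generator that keeps an already-assigned positive id and falls back to the sequence otherwise; the class and interface names below are hypothetical, written against Hibernate 3.x:

        import java.io.Serializable;
        import org.hibernate.engine.SessionImplementor;
        import org.hibernate.id.SequenceGenerator;

        // Hypothetical marker interface exposing the assigned id.
        public interface HasIntId { int getId(); }

        public class AssignedOrSequenceGenerator extends SequenceGenerator {
            @Override
            public Serializable generate(SessionImplementor session, Object entity) {
                if (entity instanceof HasIntId) {
                    int id = ((HasIntId) entity).getId();
                    if (id > 0) {
                        return id; // keep the id the application already set
                    }
                }
                return super.generate(session, entity); // otherwise pull from the sequence
            }
        }

    Wired up with @GenericGenerator(name = "proGen", strategy = "your.package.AssignedOrSequenceGenerator", parameters = @Parameter(name = "sequence", value = "S912_PRO_SEQ")) in place of the plain @SequenceGenerator.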

    Read the article

  • CodePlex Daily Summary for Sunday, November 25, 2012

    CodePlex Daily Summary for Sunday, November 25, 2012Popular ReleasesMath.NET Numerics: Math.NET Numerics v2.3.0: Common: Continued major linear algebra storage rework, in this release focusing on vectors (previous release was on matrices) Static CreateRandom for all dense matrix and vector types Thin QR decomposition (in additin to existing full QR) Consistent static Sample methods for continuous and discrete distributions (was previously missing on a few) Portable build adds support for WP8 (.Net 4.0 and higher, SL5, WP8 and .NET for Windows Store apps) Various bug, performance and usability ...ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.1: +2012-11-25 v3.2.1 +????????。 -MenuCheckBox?CheckedChanged??????,??????????。 -???????window.IDS??????????????。 -?????(??TabCollection,ControlBaseCollection)???,????????????????。 +Grid??。 -??SelectAllRows??。 -??PageItems??,?????????????,?????、??、?????。 -????grid/gridpageitems.aspx、grid/gridpageitemsrowexpander.aspx、grid/gridpageitems_pagesize.aspx。 -???????????????????。 -??ExpandAllRowExpanders??,?????????????????(grid/gridrowexpanderexpandall2.aspx)。 -??????ExpandRowExpande...VidCoder: 1.4.9 Beta: Updated HandBrake core to SVN 5079. Fixed crashes when encoding DVDs with title gaps.ZXing.Net: ZXing.Net 0.10.0.0: On the way to a release 1.0 the API should be stable now with this version. sync with rev. 2521 of the java version windows phone 8 assemblies improvements and fixesCharmBar: Windows 8 Charm Bar for Windows 7: Windows 8 Charm Bar for Windows 7BlackJumboDog: Ver5.7.3: 2012.11.24 Ver5.7.3 (1)SMTP???????、?????????、??????????????????????? (2)?????????、?????????????????????????? (3)DNS???????CNAME????CNAME????????????????? (4)DNS????????????TTL???????? (5)???????????????????????、?????????????????? (6)???????????????????????????????TEncoder: 3.1: -Added: Turkish translation (Translators, please see "To translators.txt") -Added: Profiles are now stored in different files under "Profiles" folder -Added: User created Profiles will be saved in a differen directory -Added: Custom video and audio options to profiles -Added: Container options to profiles -Added: Parent folder of input file will be created in the output folder -Added: Option to use 32bit FFmpeg eventhough the OS is 64bit -Added: New skin "Mint" -Fixed: FFMpeg could not open A...Liberty: v3.4.3.0 Release 23rd November 2012: Change Log -Added -H4 A dialog which gives further instructions when attempting to open a "Halo 4 Data" file -H4 Added a short note to the weapon editor stating that dropping your weapons will cap their ammo -Reach Edit the world's gravity -Reach Fine invincibility controls in the object editor -Reach Edit object velocity -Reach Change the teams of AI bipeds and vehicles -Reach Enable/disable fall damage on the biped editor screen -Reach Make AIs deaf and/or blind in the objec...Umbraco CMS: Umbraco 4.11.0: NugetNuGet BlogRead the release blog post for 4.11.0. Whats new50 bugfixes (see the issue tracker for a complete list) Read the documentation for the MVC bits. Breaking changesGetPropertyValue now returns an object, not a string (only affects upgrades from 4.10.x to 4.11.0) NoteIf you need Courier use the release candidate (as of build 26). The code editor has been greatly improved, but is sometimes problematic in Internet Explorer 9 and lower. 
Previously it was just disabled for IE and...Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.3: Fixed supported files list on open dialog (added .pls and .m3u) Impulse Media Player splash message (can be disabled anyway)WiX Toolset: WiX v3.7 RC: WiX v3.7 RC (3.7.1119.0) provides feature complete Bundle update and reference tracking plus several bug fixes. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/11/20/WiX-v3.7-Release-Candidate-availablePicturethrill: Version 2.11.20.0: Fixed up Bing image provider on Windows 8Excel AddIn to reset the last worksheet cell: XSFormatCleaner.xla: Modified the commandbar code to use CommandBar IDs instead of English names.Json.NET: Json.NET 4.5 Release 11: New feature - Added ITraceWriter, MemoryTraceWriter, DiagnosticsTraceWriter New feature - Added StringEscapeHandling with options to escape HTML and non-ASCII characters New feature - Added non-generic JToken.ToObject methods New feature - Deserialize ISet<T> properties as HashSet<T> New feature - Added implicit conversions for Uri, TimeSpan, Guid New feature - Missing byte, char, Guid, TimeSpan and Uri explicit conversion operators added to JToken New feature - Special case...HigLabo: HigLabo_20121119: HigLabo_2012111 --HigLabo.Mail-- Modify bug fix of ExecuteAppend method. Add ExecuteXList method to ImapClient class. --HigLabo.Net.WindowsLive-- Add AsyncCall to WindowsLiveClient class.SharePoint CAML Extensions: Version 1.1: Beta version! <Membership>, <Today/>, <Now/> and <UserID /> tags are not supported!mojoPortal: 2.3.9.4: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2394-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you to use .NET 4, we will probably drop support for .NET 3.5 once .NET 4.5 is available The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...Keleyi: keleyi-1.1.0: ????: .NET Framework 4.0 ??:MD5??,?????IP??(??????),????? www.keleyi.com keleyi.codeplex.comHoliday Calendar: Calendar Control: Month navigation introduced.NDateTime: Version 1.0: This is the first releaseNew Projects.NET vFaceWall: .NET vFaceWall is autility script written in asp.net which allows developer to build next generation facebook wall style user profiles for asp.net websites.29th Infantry Division Engineer Corps: Code repository and project management hub for the 29th Infantry Division's Engineer Corps.A.M. Lost: A.M. Lost is game project developed during the GameDevParty Jam 3 event (23-24-25 november 2012).ABAP BLOG MIKE: ABAP blog AgileNETSlayer: Agile.Net Deobfuscator, supports all obfuscation methods.BERP Games: This project site contains XNA game development concepts and software produced by BERP Games.CFileUpload: This project will let you upload files with progress bar, without using Flash, HTML5 or similar technologies that the user's browser might not have. CharmBar: Windows 8 Charm Bar is a application that works just like the Windows 8 CharmBar. The word charmbar, the Windows 8 Logo are trademarks of Microsoft Corporation.Confidentialité des données: Développer une application permettant de faciliter la mise en œuvre, l’utilisation et la gestion de ces outils, en exploitant les API .NET.End of control: Project has not been started yet. 
Just absorbing attentions.Enhanced XML Search: Easy and fastest methods to manipulate XML Data.Exchange Automatic Replies Administrator: Exchange Automatic Replies Administrator is a PowerShell GUI tool for Helpdesk and Sysadmins to set a users' Out-Of-Office without using the command line.File Explorer for WPF: FileExplorer is a WPF control that's emulate the Windows Explorer, it supports both FIleSystem and non-FileSystem class.ForcePlot: Using brute force, plots any difficult equation even some popular software cannot plot. Currently supports 2D graphs, trigonometric and logarithmic functions.Game Dev: Currently in ideas phaseGroupEmulationEMK11: Discussion and sharing of member data Group Emulation Engineering Mechanical 2011Jenkins CI: Views, Jobs, Build status and color. mvcMusicOnLine: mvcMusicOnLinemy own site project no 1: still testing...Nauplius.KeyStore: Provides secure application key storage backed by SQL 2008 and Active Directory.Noctl Library: Noctl is a C# multiplatform Library.open Archive Mediator: High-performance middle-ware for process historiansOpen Source Game - Prince's Revenge: Open Source Game.PackToKindle: Project allow to send folder content to your kindle. Functionality is the same as the official SendToKindle application, but this project is designed to use froPunkPong: PunkPong is an open source "Pong" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses keyboard or mouse. This cross-platform and cross-browser game was tested under BeOS, Linux, NetBSD, OpenBSD, FreeBSD, Windows and others.Quick Data Processor: A simple and quick data processor for csv files.SFProject: It'll be a game at some pointsilverlight ? flash? policy ??: silverlight? flash? policy socket server? ??Universe.WCF.Behaviors: Universe.WCF.Behaviors provides behaviors: - Easy migration from Remoting - Transparent delivery - Traffic Statistics - WCF Streaming Adaptor for BinaryWriter and TextWriter - etc WowDotNetAPI for Silverlight: Silverlight class library implementation of Briam Ramos World Of Warcraft WowDotNetAPI .NET library on GitHub ( https://github.com/briandek/WowDotNetAPI ) C# .Net library to access the new World of Warcraft Community Platform API.WP7NUMConvert: A really simple project to create a small library for number conversions in multiple formats. Written in VB for Windows PhoneWPF CodeEditor: The Dev Tools project is intended to contain a set of features, tools & controls useful for development purposes.

    Read the article

  • Ambiguous Generic restriction T:class vs T:struct

    - by Maslow
    This code generates a compiler error that the member is already defined with the same parameter types: private T GetProperty<T>(Func<Settings, T> GetFunc) where T:class { try { return GetFunc(Properties.Settings.Default); } catch (Exception exception) { SettingReadException(this, exception); return null; } } private TNullable? GetProperty<TNullable>(Func<Settings, TNullable> GetFunc) where TNullable : struct { try { return GetFunc(Properties.Settings.Default); } catch (Exception ex) { SettingReadException(this, ex); return new Nullable<TNullable>(); } } Is there a clean workaround?
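
    Generic constraints are not part of a method's signature, hence the collision. One known workaround (it needs C# 4 optional parameters) is a pair of dummy marker types that make the signatures differ; a sketch:

        // Marker types exist only to disambiguate the overloads; callers never pass them.
        private sealed class RequireClass<T> where T : class { }
        private sealed class RequireStruct<T> where T : struct { }

        private T GetProperty<T>(Func<Settings, T> getFunc, RequireClass<T> ignored = null)
            where T : class
        {
            try { return getFunc(Properties.Settings.Default); }
            catch (Exception ex) { SettingReadException(this, ex); return null; }
        }

        private T? GetProperty<T>(Func<Settings, T> getFunc, RequireStruct<T> ignored = null)
            where T : struct
        {
            try { return getFunc(Properties.Settings.Default); }
            catch (Exception ex) { SettingReadException(this, ex); return null; }
        }

    The simpler alternative is just giving the two methods different names, e.g. GetRefProperty and GetValueProperty.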

    Read the article

  • Entity Framework doesn't like 0..1 to * relationships.

    - by Orion Adrian
    I have a database framework where I have two tables. The first table has a single column that is an identity and primary key. The second table contains two columns. One is a nvarchar primary key and the other is a nullable foreign key to the first table. On the default import of the database I get the following error: Condition cannot be specified for Column member 'ForeignKeyId' because it is marked with a 'Computed' or 'Identity' StoreGeneratedPattern. where ForeignKeyId is the second foreign key reference in the second table. Is this just something the entity model doesn't do? Or am I missing something?

    Read the article

  • Hibernate: same generated value in two properties

    - by Markos Fragkakis
    Hi, I have an entity A with fields: aId (the system id) and bId. I want the first to be generated: @Id @Column(name = "PRODUCT_ID", unique = true, nullable = false, precision = 12, scale = 0) @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PROD_GEN") @BusinessKey public Long getAId() { return this.aId; } I want bId to initially be exactly the same as aId. One approach is to insert the entity, then get the aId generated by the DB (2nd query) and then update the entity, setting bId to be equal to aId (3rd query). Is there a way to get bId to receive the same generated value as aId? Note that afterwards, I want to be able to update bId from my GUI. If the solution is plain JPA, even better.
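
    With GenerationType.SEQUENCE, Hibernate typically allocates the id when persist() is called, before the INSERT is flushed, so the value is available in the same persistence context without any extra query; a hedged sketch:

        // Copy the freshly allocated id onto bId before the flush:
        em.persist(a);            // sequence value is fetched and assigned here
        a.setBId(a.getAId());     // included in the same INSERT when the context flushes

    If bId must also survive later edits from the GUI, it stays a plain updatable column; only its initial value mirrors aId.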

    Read the article

  • Linq to sql Incorrect varchar length

    - by scott
    I have a table with a nullable varchar(50) column in it. When I am updating the value through linq to sql and trace the call in profiler it is defining the parameter as varchar(36). This is obviously causing some minor issues when we are trying to insert data that is between 37 and 50 characters long. I have tried removing the table and re-adding it to the design surface but the same thing happens. I also tried removing that property and adding it manually, same issue. When I look at the designer.cs code it shows the attribute properly: [Column(Storage="_Name", DbType="VarChar(50)")] I am out of ideas, anybody seen this before? Every other column is correct.

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain type of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any particular database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options: Prevent any change to A's account while the transfer is taking place. That boils down to locking. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers, as it turns out, don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, the two can be combined into a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question, "how do you enforce the non-negative balance rule?", then boils down to: Insert entry in ledger. Run validation of recent entries. Insert reverse entry to roll back the transaction if validation failed. What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly on too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case, in choosing a ledger approach, we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transaction" issue.

    Now let's turn our attention to consistency. What you should know about mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, mongo offers several write-concerns. Write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business.

    There are several other write-concerns, varying from flushing the write to the disk of the writable instance, flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The former choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we have ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write-concerns, mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance, and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets transactions newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted/reverted) would be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted, but we don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
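
    To make the ledger idea concrete, here is a minimal sketch using Python and pymongo (the collection layout and field names are illustrative, not from the post): the transfer is one single-document insert, and validation is just a sum over entries.

        from datetime import datetime, timezone
        from pymongo import MongoClient

        client = MongoClient()            # assumes a local mongod
        ledger = client.bank.ledger

        def transfer(from_id, to_id, amount):
            # one document per transfer; the whole write is atomic
            ledger.insert_one({
                "ts": datetime.now(timezone.utc),
                "from": from_id,
                "to": to_id,
                "amount": amount,
                "validated": False,
            })

        def balance(account_id):
            # validation rule: credits minus debits must stay non-negative
            credits = sum(e["amount"] for e in ledger.find({"to": account_id}))
            debits = sum(e["amount"] for e in ledger.find({"from": account_id}))
            return credits - debits

        transfer("A", "B", 100)
        if balance("A") < 0:
            transfer("B", "A", 100)       # compensating entry rolls the transfer back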

    Read the article

  • Does a System.DirectoryServices.AccountManagement.Principal ever have a null GUID?

    - by Josh
    I have a situation where I need to store a globally unique identifier that points to an Active Directory user account. I'm leaning towards the Guid because it is easier to store than the Sid. According to the MSDN entry, the property (which is a Nullable<Guid>) will always return null if the ContextType is set to "Machine." I don't need to worry about this because our ContextType will always be set to "Domain." My question is, will this property ever return null if the ContextType is "Domain"? In other words, will an account in an AD DS store always have a Guid?
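
    For what it's worth, a small sketch of the nullable access pattern (the account name parameter is a placeholder):

        using System;
        using System.DirectoryServices.AccountManagement;

        static Guid? GetAccountGuid(string samAccountName)
        {
            using (var ctx = new PrincipalContext(ContextType.Domain))
            {
                var user = UserPrincipal.FindByIdentity(ctx, samAccountName);
                // objects stored in AD DS carry an objectGUID, so for a Domain context
                // the Guid is expected to have a value whenever the lookup succeeds
                return user != null ? user.Guid : (Guid?)null;
            }
        }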

    Read the article

  • SQL - Two foreign keys that have a dependency between them

    - by Brian
    The current structure is as follows: Table RowType: RowTypeID Table RowSubType: RowSubTypeID FK_RowTypeID Table ColumnDef: FK_RowTypeID FK_RowSubTypeID (nullable) In short, I'm mapping column definitions to rows. In some cases, those rows have subtype(s), which will have column definitions specific to them. Alternatively, I could hang those column definitions that are specific to subtypes off their own table, or I could combine the data in RowType and RowSubType into one table and work with a single ID, but I'm not sure either is a better solution (if anything, I'd lean towards the latter, as we mostly end up pulling ColumnDefs for a given RowType/RowSubType). Is the current design SQL blasphemy? If I keep the current structure, how do I ensure that if RowSubTypeID is specified in ColumnDef, it corresponds to the RowType specified by RowTypeID? Should I try to enforce this with a trigger, or am I missing a simple redesign that would solve the problem?
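
    The standard relational answer is an identifying relationship: give RowSubType a composite key that carries its parent's RowTypeID, and have ColumnDef reference the pair. A sketch (ColumnDef's own key column is an assumed name):

        CREATE TABLE RowSubType (
            RowTypeID    int NOT NULL REFERENCES RowType (RowTypeID),
            RowSubTypeID int NOT NULL,
            PRIMARY KEY (RowTypeID, RowSubTypeID)
        );

        CREATE TABLE ColumnDef (
            ColumnDefID  int NOT NULL PRIMARY KEY,  -- hypothetical key column
            RowTypeID    int NOT NULL REFERENCES RowType (RowTypeID),
            RowSubTypeID int NULL,
            -- a non-null RowSubTypeID can only pair with its own RowTypeID;
            -- when RowSubTypeID is NULL the composite FK is simply not enforced
            FOREIGN KEY (RowTypeID, RowSubTypeID)
                REFERENCES RowSubType (RowTypeID, RowSubTypeID)
        );

    No trigger needed; the composite foreign key enforces the dependency declaratively.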

    Read the article

  • Lucene.NET - Find documents that do not contain a specified field

    - by Brandon
    Let's say I have 2 instances of a class called 'Animal'. Animal has 3 fields: Name, Age, and Type. The Name field is nullable, so before I insert an instance of Animal as a Lucene indexed document, I check whether Animal.Name == null, and if so, I do not insert it as a field in my document. If I were to retrieve all animals, I would see that the Name field does not exist and I can set its value to null. However, there may be situations where I want to say "Get me all animals that do not have a name specified yet." In this situation I want to retrieve all Lucene.NET documents from my animal index that do not contain the Name field. Is there an easy way to do this with Lucene.NET? I want to stay away from having to perform some sort of hack to check if my name field has a value of 'null'.
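
    One approach that avoids storing a sentinel 'null' value is matching all documents and subtracting those that have any term in the Name field; a sketch for Lucene.Net 3.x (older 2.x builds spell the enum BooleanClause.Occur):

        using Lucene.Net.Index;
        using Lucene.Net.Search;

        // all documents, minus those where Name contains any value
        var query = new BooleanQuery();
        query.Add(new MatchAllDocsQuery(), Occur.MUST);
        query.Add(new WildcardQuery(new Term("Name", "*")), Occur.MUST_NOT);

    The bare-wildcard clause enumerates every term in Name, so on large indexes a dedicated indexed flag field (e.g. a hypothetical "HasName") is the cheaper design.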

    Read the article

  • How to create a view of table that contains a timestamp column?

    - by Matt Faus
    This question is an extension of a previous one I have asked. I have a table (2014_05_31_transformed.Video) with a schema that looks like this. I have put up the JSON returned by the BigQuery API describing its schema in this gist. I am trying to create a view against this table with an API call that looks like this: { 'view': { 'query': u 'SELECT deleted_mod_time FROM [2014_05_31_transformed.Video]' }, 'tableReference': { 'datasetId': 'latest_transformed', 'tableId': u 'Video', 'projectId': 'redacted' } } But the BigQuery API is returning this error: HttpError: https://www.googleapis.com/bigquery/v2/projects/124072386181/datasets/latest_transformed/tables?alt=json returned "Invalid field name "deleted_mod_time.usec". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long." The schema that the BigQuery API returns does not make any distinction between a TIMESTAMP data type and a regular nullable INTEGER data type, so I can't think of a way to programmatically correct this problem. Is there anything I can do, or is this a bug in BigQuery's view implementation?

    Read the article

  • How to handle JPA annotations for a pointer to a generic interface

    - by HDave
    I have a generic class that is also a mapped superclass that has a private field holding a pointer to another object of the same type: @MappedSuperclass public abstract class MyClass<T extends MyIfc<T>> implements MyIfc<T> { @OneToOne() @JoinColumn(name = "previous", nullable = true) private T previous; ... } My problem is that Eclipse is showing an error in the file at the @OneToOne: "Target Entity "T" for previous is not an Entity." All of the implementations of MyIfc are, in fact, entities. I should also add that each concrete implementation that inherits from MyClass uses a different value for T (because T is the concrete class itself), so I can't use the "targetEntity" attribute. I guess if there is no answer then I'll have to move this JPA annotation to all the concrete subclasses of MyClass. It just seems like JPA/Hibernate should be smart enough to know it'll all work out at run-time. Makes me wonder if I should just ignore this error somehow.

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC). var wins = xs     .HoppingWindow(TimeSpan.FromSeconds(5),                    TimeSpan.FromSeconds(2),                    new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc)); Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up? var query = wins.Select(win => win.Count()); A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input: var xs = this.Application                 .DefineEnumerable(() => new[]                     { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })                 .ToStreamable(AdvanceTimeSettings.IncreasingStartTime); Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2 so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher. Figure 1: Timeline The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks: static long SnapToBoundary(long t, long a, long p) {     return t - ((t - a) % p) - (t > a ? 
0L : p); } How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time, assuming that there are no gaps in the windows (i.e. interval <= duration), a limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends: long e = a + (d % p); Using the window end alignment, we are finally ready to describe the start time selector: static long AdjustStartTime(long t, long e, long p) { return SnapToBoundary(t, e, p) + p; } To find the latest window including the end time for an event, we look for the last window start time (non-inclusive): public static long AdjustEndTime(long t, long a, long d, long p) { return SnapToBoundary(t - 1, a, p) + p + d; } Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1: public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) { if (source == null) throw new ArgumentNullException("source"); // reason about DateTime and TimeSpan in ticks long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks); long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks)); // set alignment to earliest possible window var a = alignment.ToUniversalTime().Ticks % p; // verify constraints of this solution if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); } if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); } // find the alignment of window ends long e = a + (d % p); return source.AlterEventLifetime( evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)), evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) - ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p))); } public static DateTime ToDateTime(long ticks) { // just snap to min or max value rather than under/overflowing return ticks < DateTime.MinValue.Ticks ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc) : ticks > DateTime.MaxValue.Ticks ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc) : new DateTime(ticks, DateTimeKind.Utc); } Finally, we can describe our custom hopping window operator: public static IQWindowedStreamable<T> HoppingWindow2<T>( IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) { if (source == null) { throw new ArgumentNullException("source"); } return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow(); } By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing!
public void Main() { var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero); var duration = TimeSpan.FromSeconds(5); var interval = TimeSpan.FromSeconds(2); var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc); var events = this.Application.DefineEnumerable(() => new[] { EdgeEvent.CreateStart(start.AddSeconds(0), "e0"), EdgeEvent.CreateStart(start.AddSeconds(1), "e1"), EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"), EdgeEvent.CreateStart(start.AddSeconds(3), "e2"), EdgeEvent.CreateStart(start.AddSeconds(9), "e3"), EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"), EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"), }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime); var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment); var query = from win in HoppingWindow2(events, duration, interval, alignment) select win.Count(); DisplayResults(adjustedEvents, "Adjusted Events"); DisplayResults(query, "Query"); } As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time:

    Adjusted Events
        StartTime               EndTime                  Payload
        6/28/2012 12:00:01 AM   12/31/9999 11:59:59 PM   e0
        6/28/2012 12:00:03 AM   6/28/2012 12:00:07 AM    e1
        6/28/2012 12:00:05 AM   6/28/2012 12:00:15 AM    e2
        6/28/2012 12:00:11 AM   6/28/2012 12:00:15 AM    e3

    Query
        StartTime               EndTime                  Payload
        6/28/2012 12:00:01 AM   6/28/2012 12:00:03 AM    1
        6/28/2012 12:00:03 AM   6/28/2012 12:00:05 AM    2
        6/28/2012 12:00:05 AM   6/28/2012 12:00:07 AM    3
        6/28/2012 12:00:07 AM   6/28/2012 12:00:11 AM    2
        6/28/2012 12:00:11 AM   6/28/2012 12:00:15 AM    3
        6/28/2012 12:00:15 AM   12/31/9999 11:59:59 PM   1

    Regards, The StreamInsight Team

    Read the article

  • LINQ - Select Statement - The null value cannot be assigned to a member with type System.Int32 which

    - by thiag0
    I am trying to achieve the following... _4S.NJB_Request request = (from r in db.NJB_Requests where r.RequestId == referenceId select r).Take(1).SingleOrDefault(); Getting the following exception... Message: The null value cannot be assigned to a member with type System.Int32 which is a non-nullable value type. StackTrace: at System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult) at System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries) at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) at System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute[S](Expression expression) at System.Linq.Queryable.SingleOrDefault[TSource](IQueryable`1 source) at DAL.SqlDataProvider.MarkNJBPCRequestAsComplete(Int32 referenceId, Int32 processState) I have verified that 'referenceId' does have a value. Anyone know why this would happen in a select statement? Thanks!
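
    This exception usually means one of the INT columns of NJB_Requests holds NULL while the generated property is a plain int, so materialization fails before your code runs. The hedged fix is marking that property nullable in the designer; the property name below is hypothetical:

        // before: a NULL in the column cannot be assigned to int
        [Column(Storage="_StatusId", DbType="Int")]
        public int StatusId { get { return _statusId; } set { _statusId = value; } }

        // after: make the property (and its backing field) Nullable<int>
        [Column(Storage="_StatusId", DbType="Int")]
        public System.Nullable<int> StatusId { get { return _statusId; } set { _statusId = value; } }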

    Read the article

  • Java: JPA classes, refactoring from Date to DateTime

    - by bguiz
    With a table created using this SQL Create Table X ( ID varchar(4) Not Null, XDATE date ); and an entity class defined like so @Entity @Table(name = "X") public class X implements Serializable { @Id @Basic(optional = false) @Column(name = "ID", nullable = false, length = 4) private String id; @Column(name = "XDATE") @Temporal(TemporalType.DATE) private Date xDate; //java.util.Date ... } With the above, I can use JPA to achieve object relational mapping. However, the xDate attribute can only store dates, e.g. dd/MM/yyyy. How do I refactor the above to store a full date object using just one field, i.e. dd/MM/yyyy HH24:mm?
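
    Switching the temporal type to TIMESTAMP keeps the single java.util.Date field while persisting both date and time; the column type has to change to match (Oracle-style DDL assumed):

        @Column(name = "XDATE")
        @Temporal(TemporalType.TIMESTAMP)
        private Date xDate; // java.util.Date, now carrying date and time

        // and on the database side, something like (Oracle syntax assumed):
        // ALTER TABLE X MODIFY (XDATE timestamp);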

    Read the article

  • How to set up precision attribute used by @Collumn annotation ???

    - by Arthur Ronald F D Garcia
    I often use java.lang.Integer as a primary key. Here you can see a piece of code: @Entity private class Person { private Integer id; @Id @Column(precision=8, nullable=false) public Integer getId() { } } I need to set its precision attribute value equal to 8. But when exporting the schema (Oracle), it does not work as expected: AnnotationConfiguration configuration = new AnnotationConfiguration(); configuration .addAnnotatedClass(Person.class) .setProperty(Environment.DIALECT, "org.hibernate.dialect.OracleDialect") .setProperty(Environment.DRIVER, "oracle.jdbc.driver.OracleDriver"); SchemaExport schema = new SchemaExport(configuration); schema.setOutputFile("schema.sql"); schema.create(true, false); schema.sql outputs: create table Person (id number(10,0) not null) I always get 10. Is there some workaround to get 8 instead of 10?
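
    The Oracle dialect maps Integer to number(10,0) and honors precision only on decimal-typed properties, so one hedged workaround is to hard-code the column type:

        @Id
        @Column(nullable = false, columnDefinition = "number(8,0)")
        public Integer getId() { return id; }

    This trades dialect portability for exact control over the generated DDL.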

    Read the article

  • c# object initializer complexity. best practice

    - by Andrew Florko
    I was too excited when object initializers appeared in C#. MyClass a = new MyClass(); a.Field1 = Value1; a.Field2 = Value2; can be rewritten shorter: MyClass a = new MyClass { Field1 = Value1, Field2 = Value2 } Object initializer code is more obvious, but when the number of properties comes to a dozen and some of the assignments deal with nullable values, it's hard to debug where the "null reference error" is: Studio shows the whole object initializer as the error point. Nowadays I use object initializers only for straightforward, error-free property assignments. How do you use object initializers for complex assignments, or is it bad practice to use dozens of assignments at all? Thank you in advance!
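
    One pragmatic middle ground: compute any assignment that can throw on its own line before the initializer, so the debugger points at the exact failure, and keep the initializer for the trivially safe properties; a sketch with illustrative names:

        // risky lookups first, each individually debuggable
        var field2 = source != null ? source.Value2 : null;

        var a = new MyClass
        {
            Field1 = Value1,   // safe, straightforward assignments stay here
            Field2 = field2
        };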

    Read the article

  • Grails: Querying Associations causes groovy.lang.MissingMethodException

    - by Paul
    Hi, I've got an issue with Grails where I have a test app with: class Artist { static constraints = { name() } static hasMany = [albums:Album] String name } class Album { static constraints = { name() } static hasMany = [ tracks : Track ] static belongsTo = [artist: Artist] String name } class Track { static constraints = { name() lyrics(nullable: true) } Lyrics lyrics static belongsTo = [album: Album] String name } The following query (and a more advanced, nested association query) works in the Grails Console but fails with a groovy.lang.MissingMethodException when running the app with 'run-app': def albumCriteria = tunehub.Album.createCriteria() def albumResults = albumCriteria.list { like("name", receivedAlbum) artist { like("name", receivedArtist) } // Fails here maxResults(1) } Stacktrace: groovy.lang.MissingMethodException: No signature of method: java.lang.String.call() is applicable for argument types: (tunehub.LyricsService$_getLyrics_closure1_closure2) values: [tunehub.LyricsService$_getLyrics_closure1_closure2@604106] Possible solutions: wait(), any(), wait(long), each(groovy.lang.Closure), any(groovy.lang.Closure), trim() at tunehub.LyricsService$_getLyrics_closure1.doCall(LyricsService.groovy:61) at tunehub.LyricsService$_getLyrics_closure1.doCall(LyricsService.groovy) (...truncated...) Any pointers?

    Read the article

  • Ado.Net Entity produces "namespace cannot be found"

    - by Dave
    I've seen several possible solutions to this, but none have worked for me. After adding a ADO.NET Entity Data Model to my .Net Forms C# web project, I am unable to use it. Perhaps I made a mistake adding it? The name of the file added is QcFormData.edmx. In my code, perhaps I'm instantiating it incorrectly? I tried adding the line: QcFormDataContainer db = new QcFormDataContainer(); It appears in Intellisense, but when compiling I get the error : Error 13 The type or namespace name 'QcFormDataContainer' could not be found (are you missing a using directive or an assembly reference?) I've followed the suggestions that I found online that did not help: 1) made sure there is "using System.Data.Entity" 2) made sure the dll exists. 3) made sure the reference exists. 4) one post said use using System.Web.Data.Entity; but I do not see that available. What am I missing? QcFormData.edmx <?xml version="1.0" encoding="utf-8"?> <edmx:Edmx Version="3.0" xmlns:edmx="http://schemas.microsoft.com/ado/2009/11/edmx"> <!-- EF Runtime content --> <edmx:Runtime> <!-- SSDL content --> <edmx:StorageModels> <Schema Namespace="MyCocoModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2008" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2009/11/edm/ssdl"> <EntityContainer Name="MyCocoModelStoreContainer"> <EntitySet Name="QcFieldValues" EntityType="MyCocoModel.Store.QcFieldValues" store:Type="Tables" Schema="dbo" /> </EntityContainer> <EntityType Name="QcFieldValues"> <Key> <PropertyRef Name="ID" /> </Key> <Property Name="ID" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="FieldID" Type="nvarchar" MaxLength="100" /> <Property Name="FieldValue" Type="nvarchar" MaxLength="100" /> <Property Name="DateTimeAdded" Type="datetime" /> <Property Name="OrderReserveNumber" Type="nvarchar" MaxLength="50" /> </EntityType> </Schema> </edmx:StorageModels> <!-- CSDL content --> <edmx:ConceptualModels> <Schema Namespace="MyCocoModel" Alias="Self" p1:UseStrongSpatialTypes="false" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns:p1="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns="http://schemas.microsoft.com/ado/2009/11/edm"> <EntityContainer Name="MyCocoEntities" p1:LazyLoadingEnabled="true"> <EntitySet Name="QcFieldValues" EntityType="MyCocoModel.QcFieldValue" /> </EntityContainer> <EntityType Name="QcFieldValue"> <Key> <PropertyRef Name="ID" /> </Key> <Property Name="ID" Type="Int32" Nullable="false" p1:StoreGeneratedPattern="Identity" /> <Property Name="FieldID" Type="String" MaxLength="100" Unicode="true" FixedLength="false" /> <Property Name="FieldValue" Type="String" MaxLength="100" Unicode="true" FixedLength="false" /> <Property Name="DateTimeAdded" Type="DateTime" Precision="3" /> <Property Name="OrderReserveNumber" Type="String" MaxLength="50" Unicode="true" FixedLength="false" /> </EntityType> </Schema> </edmx:ConceptualModels> <!-- C-S mapping content --> <edmx:Mappings> <Mapping Space="C-S" xmlns="http://schemas.microsoft.com/ado/2009/11/mapping/cs"> <EntityContainerMapping StorageEntityContainer="MyCocoModelStoreContainer" CdmEntityContainer="MyCocoEntities"> <EntitySetMapping Name="QcFieldValues"> <EntityTypeMapping TypeName="MyCocoModel.QcFieldValue"> <MappingFragment StoreEntitySet="QcFieldValues"> <ScalarProperty Name="ID" ColumnName="ID" /> <ScalarProperty Name="FieldID" ColumnName="FieldID" /> <ScalarProperty 
Name="FieldValue" ColumnName="FieldValue" /> <ScalarProperty Name="DateTimeAdded" ColumnName="DateTimeAdded" /> <ScalarProperty Name="OrderReserveNumber" ColumnName="OrderReserveNumber" /> </MappingFragment> </EntityTypeMapping> </EntitySetMapping> </EntityContainerMapping> </Mapping> </edmx:Mappings> </edmx:Runtime> <!-- EF Designer content (DO NOT EDIT MANUALLY BELOW HERE) --> <Designer xmlns="http://schemas.microsoft.com/ado/2009/11/edmx"> <Connection> <DesignerInfoPropertySet> <DesignerProperty Name="MetadataArtifactProcessing" Value="EmbedInOutputAssembly" /> </DesignerInfoPropertySet> </Connection> <Options> <DesignerInfoPropertySet> <DesignerProperty Name="ValidateOnBuild" Value="true" /> <DesignerProperty Name="EnablePluralization" Value="True" /> <DesignerProperty Name="IncludeForeignKeysInModel" Value="True" /> <DesignerProperty Name="CodeGenerationStrategy" Value="None" /> </DesignerInfoPropertySet> </Options> <!-- Diagram content (shape and connector positions) --> <Diagrams></Diagrams> </Designer> </edmx:Edmx>

    Read the article
