Search Results

Search found 35536 results on 1422 pages for 'test framework'.


  • Problems with inheritance query view and one to many association in entity framework 4

    - by Kazys
    Hi, I am stuck in a situation and don't see a way out. The problem is in my bigger model, but I have made a small example which shows the same problem. I have 4 tables, which I called SuperParent, NamedParent, TypedParent and ParentType. NamedParent and TypedParent derive from SuperParent. TypedParent has a one-to-many association with ParentType. I describe the mapping for the entities using QueryView. The problem is that when I want to get TypedParents and Include ParentType, I get the following exception: An error occurred while preparing the command definition. See the inner exception for details. --- System.ArgumentException: The ResultType of the specified expression is not compatible with the required type. The expression ResultType is 'Transient.reference[PasibandymaiModel.SuperParent]' but the required type is 'Transient.reference[PasibandymaiModel.TypedParent]'. Parameter name: arguments[1] To get TypedParents I use the following code: context.SuperParent.OfType<TypedParent>().Include("ParentType"); My edmx file:
    <edmx:Edmx Version="2.0" xmlns:edmx="http://schemas.microsoft.com/ado/2008/10/edmx"> <!-- EF Runtime content --> <edmx:Runtime> <!-- SSDL content --> <edmx:StorageModels> <Schema Namespace="PasibandymaiModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2005" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2009/02/edm/ssdl"> <EntityContainer Name="PasibandymaiModelStoreContainer"> <EntitySet Name="NamedParent" EntityType="PasibandymaiModel.Store.NamedParent" store:Type="Tables" Schema="dbo" /> <EntitySet Name="ParentType" EntityType="PasibandymaiModel.Store.ParentType" store:Type="Tables" Schema="dbo" /> <EntitySet Name="SuperParent" EntityType="PasibandymaiModel.Store.SuperParent" store:Type="Tables" Schema="dbo" /> <EntitySet Name="TypedParent" EntityType="PasibandymaiModel.Store.TypedParent" store:Type="Tables" Schema="dbo" /> <AssociationSet Name="fk_NamedParent_SuperParent" Association="PasibandymaiModel.Store.fk_NamedParent_SuperParent"> <End Role="SuperParent" EntitySet="SuperParent" /> <End Role="NamedParent" EntitySet="NamedParent" /> </AssociationSet> <AssociationSet Name="fk_TypedParent_ParentType" Association="PasibandymaiModel.Store.fk_TypedParent_ParentType"> <End Role="ParentType" EntitySet="ParentType" /> <End Role="TypedParent" EntitySet="TypedParent" /> </AssociationSet> <AssociationSet Name="fk_TypedParent_SuperParent" Association="PasibandymaiModel.Store.fk_TypedParent_SuperParent"> <End Role="SuperParent" EntitySet="SuperParent" /> <End Role="TypedParent" EntitySet="TypedParent" /> </AssociationSet> </EntityContainer> <EntityType Name="NamedParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" /> <Property Name="Name" Type="nvarchar" Nullable="false" MaxLength="100" /> </EntityType> <EntityType Name="ParentType"> <Key> <PropertyRef Name="ParentTypeId" /> </Key> <Property Name="ParentTypeId" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="Name" Type="nvarchar" MaxLength="100" /> </EntityType> <EntityType Name="SuperParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="SomeAttribute" Type="nvarchar" Nullable="false" MaxLength="100" /> </EntityType> <EntityType Name="TypedParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" /> 
<Property Name="ParentTypeId" Type="int" Nullable="false"/> </EntityType> <Association Name="fk_NamedParent_SuperParent"> <End Role="SuperParent" Type="PasibandymaiModel.Store.SuperParent" Multiplicity="1" /> <End Role="NamedParent" Type="PasibandymaiModel.Store.NamedParent" Multiplicity="0..1" /> <ReferentialConstraint> <Principal Role="SuperParent"> <PropertyRef Name="ParentId" /> </Principal> <Dependent Role="NamedParent"> <PropertyRef Name="ParentId" /> </Dependent> </ReferentialConstraint> </Association> <Association Name="fk_TypedParent_ParentType"> <End Role="ParentType" Type="PasibandymaiModel.Store.ParentType" Multiplicity="1" /> <End Role="TypedParent" Type="PasibandymaiModel.Store.TypedParent" Multiplicity="*" /> <ReferentialConstraint> <Principal Role="ParentType"> <PropertyRef Name="ParentTypeId" /> </Principal> <Dependent Role="TypedParent"> <PropertyRef Name="ParentTypeId" /> </Dependent> </ReferentialConstraint> </Association> <Association Name="fk_TypedParent_SuperParent"> <End Role="SuperParent" Type="PasibandymaiModel.Store.SuperParent" Multiplicity="1" /> <End Role="TypedParent" Type="PasibandymaiModel.Store.TypedParent" Multiplicity="0..1" /> <ReferentialConstraint> <Principal Role="SuperParent"> <PropertyRef Name="ParentId" /> </Principal> <Dependent Role="TypedParent"> <PropertyRef Name="ParentId" /> </Dependent> </ReferentialConstraint> </Association> <Function Name="ChildDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ChildId" Type="int" Mode="In" /> </Function> <Function Name="ChildInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="ChildUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ChildId" Type="int" Mode="In" /> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="NamedParentDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="NamedParentInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> </Function> <Function Name="NamedParentUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="ParentTypeDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> </Function> <Function Name="ParentTypeInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" 
Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="ParentTypeUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="TypedParentDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="TypedParentInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> </Function> <Function Name="TypedParentUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> </Function> </Schema> </edmx:StorageModels> <!-- CSDL content --> <edmx:ConceptualModels> <Schema Namespace="PasibandymaiModel" Alias="Self" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns="http://schemas.microsoft.com/ado/2008/09/edm"> <EntityContainer Name="PasibandymaiEntities" annotation:LazyLoadingEnabled="true"> <EntitySet Name="ParentType" EntityType="PasibandymaiModel.ParentType" /> <EntitySet Name="SuperParent" EntityType="PasibandymaiModel.SuperParent" /> <AssociationSet Name="ParentTypeTypedParent" Association="PasibandymaiModel.ParentTypeTypedParent"> <End Role="ParentType" EntitySet="ParentType" /> <End Role="TypedParent" EntitySet="SuperParent" /> </AssociationSet> </EntityContainer> <EntityType Name="NamedParent" BaseType="PasibandymaiModel.SuperParent"> <Property Type="String" Name="Name" Nullable="false" MaxLength="100" FixedLength="false" Unicode="true" /> </EntityType> <EntityType Name="ParentType"> <Key> <PropertyRef Name="ParentTypeId" /> </Key> <Property Type="Int32" Name="ParentTypeId" Nullable="false" annotation:StoreGeneratedPattern="Identity" /> <Property Type="String" Name="Name" MaxLength="100" FixedLength="false" Unicode="true" /> <NavigationProperty Name="TypedParent" Relationship="PasibandymaiModel.ParentTypeTypedParent" FromRole="ParentType" ToRole="TypedParent" /> </EntityType> <EntityType Name="SuperParent" Abstract="true"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Type="Int32" Name="ParentId" Nullable="false" annotation:StoreGeneratedPattern="Identity" /> <Property Type="String" Name="SomeAttribute" Nullable="false" MaxLength="100" FixedLength="false" Unicode="true" /> </EntityType> <EntityType Name="TypedParent" BaseType="PasibandymaiModel.SuperParent"> <NavigationProperty Name="ParentType" Relationship="PasibandymaiModel.ParentTypeTypedParent" FromRole="TypedParent" ToRole="ParentType" /> <Property Type="Int32" Name="ParentTypeId" Nullable="false" /> </EntityType> <Association Name="ParentTypeTypedParent"> <End Type="PasibandymaiModel.ParentType" Role="ParentType" Multiplicity="1" /> <End Type="PasibandymaiModel.TypedParent" Role="TypedParent" Multiplicity="*" /> <ReferentialConstraint> <Principal Role="ParentType"> <PropertyRef Name="ParentTypeId" /> </Principal> 
<Dependent Role="TypedParent"> <PropertyRef Name="ParentTypeId" /> </Dependent> </ReferentialConstraint> </Association> </Schema> </edmx:ConceptualModels> <!-- C-S mapping content --> <edmx:Mappings> <Mapping Space="C-S" xmlns="http://schemas.microsoft.com/ado/2008/09/mapping/cs"> <EntityContainerMapping StorageEntityContainer="PasibandymaiModelStoreContainer" CdmEntityContainer="PasibandymaiEntities"> <EntitySetMapping Name="ParentType"> <QueryView> SELECT VALUE PasibandymaiModel.ParentType(tp.ParentTypeId, tp.Name) FROM PasibandymaiModelStoreContainer.ParentType AS tp </QueryView> </EntitySetMapping> <EntitySetMapping Name="SuperParent"> <QueryView> SELECT VALUE CASE WHEN (np.ParentId IS NOT NULL) THEN PasibandymaiModel.NamedParent(sp.ParentId, sp.SomeAttribute, np.Name) WHEN (tp.ParentId IS NOT NULL) THEN PasibandymaiModel.TypedParent(sp.ParentId, sp.SomeAttribute, tp.ParentTypeId) END FROM PasibandymaiModelStoreContainer.SuperParent AS sp LEFT JOIN PasibandymaiModelStoreContainer.NamedParent AS np ON sp.ParentId = np.ParentId LEFT JOIN PasibandymaiModelStoreContainer.TypedParent AS tp ON sp.ParentId = tp.ParentId </QueryView> <QueryView TypeName="PasibandymaiModel.TypedParent"> SELECT VALUE PasibandymaiModel.TypedParent(sp.ParentId, sp.SomeAttribute, tp.ParentTypeId) FROM PasibandymaiModelStoreContainer.SuperParent AS sp INNER JOIN PasibandymaiModelStoreContainer.TypedParent AS tp ON sp.ParentId = tp.ParentId </QueryView> <QueryView TypeName="PasibandymaiModel.NamedParent"> SELECT VALUE PasibandymaiModel.NamedParent(sp.ParentId, sp.SomeAttribute, np.Name) FROM PasibandymaiModelStoreContainer.SuperParent AS sp INNER JOIN PasibandymaiModelStoreContainer.NamedParent AS np ON sp.ParentId = np.ParentId </QueryView> </EntitySetMapping> </EntityContainerMapping> </Mapping> </edmx:Mappings> </edmx:Runtime> </edmx:Edmx> I have tried using AssociationSetMapping instead of using Association with ReferentialConstraint. But then couldn't insert related entities at once, becouse entity framework didn't provided entity key of inserted entities for related entities. Thanks for any idea

    Read the article

  • App Engine Hangout - chat with an App Engine Software Engineer in Test

    We'll be chatting with Robert Schuppenies, who is an App Engine Software Engineer in Test. He'll describe a bit about what he does, and talk about/demo some App Engine test frameworks, like the testbed module (code.google.com and code.google.com). From: GoogleDevelopers

    Read the article

  • How to drastically improve code coverage?

    - by Peter Kofler
    I'm tasked with getting a legacy application under unit test. First some background about the application: it's a 600k LOC Java RCP code base with these major problems: massive code duplication; no encapsulation, so most private data is accessible from outside (some of the business data was also made into singletons, so it's not just changeable from outside but from everywhere); and no business model, with business data stored in Object[] and double[][], so no OO. There is a good regression test suite, and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers, but that's too slow. As there is a working regression test system, I'm not afraid to aggressively refactor the system to allow unit tests to be written. How should I start attacking the problem to get some coverage quickly, so I'm able to show progress to management (and in fact start earning from the safety net of JUnit tests)? I do not want to employ tools that generate regression test suites, e.g. AgitarOne, because such tests do not check whether something is correct.

    Read the article

  • Mock Objects for Testing - Test Automation Engineer Perspective

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing? Is dealing with mock objects just a developer's job? The reason I ask is that I'm interested in QA as a career and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and from what point the QA engineer takes over testing for better test coverage. Thanks. Edit: Based on the answers below, I'm providing more details about the kind of QA I was referring to. I'm interested more in test automation than in simple QA involving record-and-playback scripts. So are test automation engineers responsible for developing frameworks, or do they have a team of developers dedicated to framework development? Yes, I was asking about the usage of mock objects for testing from a test automation engineer's perspective.

    Read the article

  • TDD: Write a separate test for object initialization or relying on other tests exercising it

    - by DXM
    This seems to be the common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't be easily altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which would put an object into a valid state. Only after it is initialized/loaded, are the members in the proper state so that other methods can be exercised. So when we start writing tests, first test is "TestLoad" and all we put in there is exercising initialization logic. Then we might add one (or few) TestLoadFailureXXX tests and those are definitely valuable. Then we start writing tests to verify other behaviors but all of them require the object to be loaded. So they all start by running exactly the same code as "TestLoad". So my question: Is TestLoad even necessary? Do you take it and let other tests simply exercise the loading? Or leave it so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in cases of loading, this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest" and if that fails you know where to start looking. In case it matters, this question is mostly about C++ and we use CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)

    Read the article

  • Test Driven Development Code Order

    - by Bobby Kostadinov
    I am developing my first project using test-driven development. I am using Zend Framework and PHPUnit. Currently my project is at 100% code coverage, but I am not sure I understand in what order I am supposed to write my code. Am I supposed to write my test FIRST, describing what my objects are expected to do, or write my objects and then test them? I've been working on completing a controller/model and then writing a test for it, but I am not sure this is what TDD is about. Any advice? For example, I wrote my Auth plugin and my Auth controller and tested that they work properly in my browser, and then I sat down to write the tests for them, which revealed some logical errors in code that did work in the browser.

    Read the article

  • Test-Driven Development with plain C: manage multiple modules

    - by Angelo
    I am new to test-driven development, but I'm loving it. There is, however, a main problem that prevents me from using it effectively. I work on embedded medical applications, in plain C, with safety issues. Suppose you have module A with a function A_function() that I want to test. This function calls a function B_function, implemented in module B. I want to decouple the modules, so, as James Grenning teaches, I create a mock module B that implements a mock version of B_function. However, the day comes when I have to implement module B with the real version of B_function. Of course the two B_function implementations cannot live in the same executable, so I don't know how to have a single "launcher" to test both modules. James Grenning's way out is to replace, in module A, the call to B_function with a function pointer that can hold either the mock or the real function according to the need. However, I work in a team, and I cannot justify a decision that would make no sense if it were not for the tests, and no one asked me explicitly to use a test-driven approach. Maybe the only way out is to generate a different executable for each module. Any smarter solution? Thank you

    Read the article

  • Modules and custom routes

    - by Dennis Haarbrink
    I'm building a website using Zend Framework and having trouble implementing modules and custom routes. There are basically two rules:
    - Select a module based on the domain (multiple domains can select a single module)
    - Regardless of domain, select one specific module based on path
    Examples:
    - domain1.com selects module domain1
    - domain1.net selects module domain1
    - domain2.com selects module domain2
    - both domain1.com/admin and domain2.com/admin select module admin
    This is the first project where I use ZF, so my experience with the framework is basically non-existent. I have done some dirty hacking in my bootstrapper where I check the domain and then execute Zend_Layout::startMVC() to get the correct layout, but that gets messed up when I'm implementing custom routes. So I was wondering: what is the best way to go about implementing this?

    Read the article

  • If you should only have one assertion per test; how to test multiple inputs?

    - by speg
    I'm trying to build up some test cases, and have read that you should try to limit the number of assertions per test case. So my question is: what is the best way to go about testing a function with multiple inputs? For example, I have a function that parses a string from the user and returns the number of minutes. The string can be in the form "5w6h2d1m", where w, h, d, m correspond to the number of weeks, hours, days, and minutes. If I wanted to follow the 'one assertion per test' rule I'd have to make multiple tests for each variation of input? That seems silly, so instead I just have something like:
        self.assertEqual(parse_date('5m'), 5)
        self.assertEqual(parse_date('5h'), 300)
        self.assertEqual(parse_date('5d'), 7200)
        self.assertEqual(parse_date('1d4h20m'), 1700)
    in the one test case. Is there a better way?

    Read the article

  • ORACLE UK TECHNOLOGY “TEST FEST”

    - by mseika
    ORACLE UK TECHNOLOGY “TEST FEST” Join us at the UKOUG Conference at the ICC in Birmingham and Take your OPN Implementation Specialist Exam for Free! 3-5 December 2012, ICC Birmingham (UK) Dear Oracle Partner,** As a priority partner, we are sending you advance notice of these exclusive “Technology Test Fest” free examination sessions. Please note that this communication will be sent out to the wider community one week from today, so please register immediately to secure your place! ** We are delighted to offer you the exclusive opportunity to register and attend the Oracle UK “Technology Test Fest” being held as Part of the UKOUG Conference at the ICC in Birmingham in the Drawing Room at the Hyatt Regency hotel adjacent to the ICC venue, from 3rd to 5th December 2012.This is your opportunity to sit your chosen Oracle Technology Specialist Implementation Exam free of charge on this day. Four sessions are being run (10.00AM and 14.00PM), with just 15 places at each session – so register now to avoid disappointment! (Exams take about 1.5 hours to complete.) REGISTER - 3 December Afternoon Session - 2:00pm REGISTER - 4 December Morning Session 10:00am REGISTER - 4 December Afternoon Session - 2:00pm REGISTER - 5 December Morning Session - 10:00am Price: FREE Address:The Drawing Room Hyatt Regency Hotel Birmingham 2 Bridge Street Birmingham BI 2JZ 3 - 5 December 2012 Which Implementation Specialist Exams are available to take?Click here to see the list of exams available for you to sit for free at the Oracle UKOUG “Technology Test Fest”. The links also include the study guide for the particular exam. Please review the Specialization Guide as well. How do I register for the Oracle UK “Technology Test Fest”? Fill out the Pearson Vue profile HERE and complete it with your OPN Company ID. NB: Instructions on how to create/update the profile can be found HERE. Register for one of the 4 sessions using the registration links at the top of this page You will need to bring your own laptop with 'Windows OS' and a form of identification to be able to take any of the exams. Need Help or Advice?For more information about the tests and Get Specialized programme, please contact: [email protected] issues with your profile or any other OPN-related problems, please contact our Oracle Partner Business Centre: [email protected] or call 08705 194 194. We look forward to welcoming you to the Oracle UK “Technology Test Fest” on the 3rd- 5thDecember 2012! Book early to avoid disappointment.

    Read the article

  • Unittest test case only touches the file name

    - by Chen OT
    I was told that unit tests are fast, and that tests which touch the DB, go across the network, or touch the file system are not unit tests. In one of my test cases, the inputs are the file names (about 300~400 of them) under a specific folder. Although these inputs are part of the file system, the execution time of this test is very fast. Should I move this test, which is fast but touches the file system, to a higher-level test suite?

    Read the article

  • New release for the Visual Studio 2010 and .NET Framework 4 Training Kit

    - by Enrique Lima
    Among the new content in the release is a set of ALM docs and labs. The ALM content referenced above is:
      o Using Code Analysis with Visual Studio 2010 to Improve Code Quality
      o Introduction to Exploratory Testing with Microsoft Test Manager 2010
      o Introduction to Platform Testing with Microsoft Test Manager 2010
      o Introduction to Quality Tracking with Visual Studio 2010
      o Introduction to Test Planning with Microsoft Test Manager 2010
    All ALM labs point to the latest version of the VS 2010 RTM VM. You can download the Training Kit from: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23507 Visit the online content: http://msdn.microsoft.com/en-us/VS2010TrainingCourse Download the most recent version of Visual Studio: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=240

    Read the article

  • Audio is working, but the speaker test doesn't work

    - by Pacquo
    I've installed Ubuntu 12.04 LTS from a minimal cd on a netbook (Asus 1001 PXD). I've installed the ubuntu-desktop package using the --no-install-recommends option. Everything works fine, except the "sound test" for headphones or analog speakers. Clicking on the "test" buttons (front left and front right) I don't hear any sound. Despite this, the audio is working properly. I've checked the audio levels with alsamixer; I have also checked that the test sounds actually exist in /usr/share/sounds/alsa. I tried an installation of Ubuntu 12.04 LTS made with the desktop-cd, and in this case the speaker test works properly. I suppose, therefore, that the problem could depend on the lack of a package, but I have not identified which one.

    Read the article

  • Interpreting and using the Asterisk "timing test" command

    - by zigg
    Timing is very important for certain kinds of applications in Asterisk. If DAHDI is the timing source, the dahdi_test command can be used to check the timing provided by the DAHDI kernel module. If dahdi_test returns exclusively measurements above 99.975%, the DAHDI timing source is generally considered good. Since Asterisk 1.6, new timing sources have become available, such as pthread and timerfd. The accuracy of these timing sources seems to be measurable with the Asterisk CLI timing test command:
        localhost*CLI> timing test
        Attempting to test a timer with 50 ticks per second.
        Using the 'timerfd' timing module for this test.
        It has been 1000 milliseconds, and we got 50 timer ticks
    My concern is that timing 50 ticks seems to be a considerably less stressful test than dahdi_test's 8192 samples in 8000 ms, particularly since just about every system I've tried it on, virtual or otherwise, can handle it. I can ask timing test to ramp it up to what I think are dahdi_test's standards:
        localhost*CLI> timing test 1024
        Attempting to test a timer with 1024 ticks per second.
        Using the 'timerfd' timing module for this test.
        It has been 1000 milliseconds, and we got 1024 timer ticks
    This will indeed break down a bit depending on the system I'm using, usually with a decrease in timer ticks. But I'm not sure whether it is useful to stress it to this level. Is there authoritative guidance on using and interpreting the timing test command to ensure that a given Asterisk system has a timing source that will work well?

    Read the article

  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all Nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and add some delays:
    - between listen and accept, to test proxy_connect_timeout
    - between accept and read, to test proxy_send_timeout
    - between read and send, to test proxy_read_timeout
    Test: 1) Server code (python):
        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)  # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)  # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)  # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()
    2) Nginx configuration: I have extended the Nginx default configuration by explicitly setting proxy_connect_timeout and adding a proxy_pass pointing to my local HTTP server:
        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }
    3) Observations:
    - proxy_connect_timeout: even though I set it to 200s and sleep only 130s between listen and accept, Nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at such an early stage (before accept). I would expect 200 here. Please explain!
    - proxy_send_timeout: I am not sure if my approach to testing proxy_send_timeout is correct - I think I still do not understand this parameter properly. After all, a delay between accept and read does not trigger proxy_send_timeout.
    - proxy_read_timeout: it seems to be pretty straightforward; setting a delay between read and write does the job.
    So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the above test if possible (or by modifying it if required)?

    Read the article

  • LLBLGen Pro v3.0 with Entity Framework v4.0 (12m video)

    - by FransBouma
    Today I recorded a video in which I illustrate some of the database-first functionality available in LLBLGen Pro v3.0. LLBLGen Pro v3.0 also supports model-first functionality, which I hope to illustrate in an upcoming video. LLBLGen Pro v3.0 is currently in beta and is scheduled to RTM some time in May 2010. It supports the following frameworks out of the box, with more scheduled to follow in the coming year: LLBLGen Pro RTL (our own o/r mapper framework), Linq to Sql, NHibernate and Entity Framework (v1 and v4). The video I linked to below illustrates the creation of an entity model for Entity Framework v4, by reverse engineering the SQL Server 2008 example database 'AdventureWorks'. The following topics (among others) are included in the video:
    - Abbreviation support (example: convert 'Qty' into 'Quantity' during name construction)
    - Flexible, framework-specific settings
    - Attribute definitions for various elements (so no requirement for buddy-classes or messing with generated code or templates)
    - Retrieval of relational model data from a database
    - Reverse engineering of tables into entities, automatically placed in groups
    - Auto-creation of inheritance hierarchies
    - Refactoring of entity fields into Value Type Definitions (DDD)
    - Mapping a Typed view onto a stored procedure resultset
    - Creation of a Typed list (definition of a query with a projection) on a set of related entities
    - Validation and correction of found inconsistencies and errors
    - Generating code using one of the pre-defined presets
    - Illustration of the code in vs.net 2010
    It also gives a good overview of what it takes with LLBLGen Pro v3.0 to start from a new project, point it to a database, get an entity model, perform tweaks and validation, and generate code which is ready to run. I am no video recording expert, so there's no audio and some mouse movements might be a little too quick. If that's the case, please pause the video. It's rather big (52MB). Click here to open the HTML page with the video (Flash). Opens in a new window. LLBLGen Pro v3.0 is currently in beta (available for v2.x customers) and scheduled to be released somewhere in May 2010.

    Read the article

  • Visual Studio Talk Show #115 is now online - Entity Framework 4 (French)

    - by guybarrette
    http://www.visualstudiotalkshow.com Matthieu Mezil: Entity Framework 4. We talk with Matthieu Mezil about version 4 of Entity Framework (EF4). Among other things, we look at how this new version, which will be included with Visual Studio 2010, makes it possible to design an ORM (Object Relational Mapper) with an Agile implementation. Matthieu Mezil is a consultant and trainer at Access IT in Paris. A C# MVP and INETA speaker, he specializes in the Entity Framework. He regularly gives talks on the subject, notably at Microsoft events. An MCT, Matthieu has also written several training courses on OOP, the C# language and, of course, the Entity Framework, which he delivers frequently. As part of his work, he often works with the Microsoft Technology Center in Paris. Matthieu is also a prominent blogger: in French at http://blogs.codes-sources.com/matthieu and in English at http://msmvps.com/blogs/matthieu. Download the show: if you would like direct access to the audio file in MP3 format, you can download it using one of the buttons below. If you prefer to use the RSS feed to download the show, you can subscribe using the button below. If you prefer to use the iTunes Podcast directory to download the show, you can subscribe using the button below.

    Read the article

  • How do i assert my exception message with JUNIT Test annotation ?

    - by Cshah
    I have written a few JUnit tests with the @Test annotation. If my test method throws a checked exception and I want to assert the message along with the exception, is there a way to do so with the JUnit @Test annotation? AFAIK, JUnit 4.7 doesn't provide this feature, but do any future versions provide it? I know that in .NET you can assert the message and the exception class. I'm looking for a similar feature in the Java world. This is what I want:
        @Test(expected = RuntimeException.class, message = "Employee ID is null")
        public void shouldThrowRuntimeExceptionWhenEmployeeIDisNull() {
        }

    Read the article

  • Junit run not picking file src/test/resources. For file required by some dependency jar

    - by saddy-dj
    Hi, I'm facing an issue where test/resources is not picked up; instead, the jar's main/resources is picked up. The scenario is: Myproject's src/test/resources has a config.xml which is needed by abc.jar, a dependency of Myproject. When running a test case for Myproject, it loads the config.xml from abc.jar instead of from Myproject's test/resources. I need to know the order in which Maven picks up resources, or whether what I'm trying to do is simply not possible. Thanks.

    Read the article

  • bulk insert and update with ADO.NET Entity Framework

    - by Keith Barrows
    I am writing a small application that does a lot of feed processing. I want to use LINQ EF for this as speed is not an issue; it is a single-user app and, in the end, will only be used once a month. My question revolves around the best way to do bulk inserts using LINQ EF. After parsing the incoming data stream I end up with a List of values. Since the end user may end up trying to import some duplicate data, I would like to "clean" the data during insert rather than reading all the records, doing a for loop, rejecting records, then finally importing the remainder. This is what I am currently doing:
        DateTime minDate = dataTransferObject.Min(c => c.DoorOpen);
        DateTime maxDate = dataTransferObject.Max(c => c.DoorOpen);
        using (LabUseEntities myEntities = new LabUseEntities())
        {
            var recCheck = myEntities.ImportDoorAccess.Where(a => a.DoorOpen >= minDate && a.DoorOpen <= maxDate).ToList();
            if (recCheck.Count > 0)
            {
                foreach (ImportDoorAccess ida in recCheck)
                {
                    DoorAudit da = dataTransferObject.Where(a => a.DoorOpen == ida.DoorOpen && a.CardNumber == ida.CardNumber).First();
                    if (da != null)
                        da.DoInsert = false;
                }
            }
            ImportDoorAccess newIDA;
            foreach (DoorAudit newDoorAudit in dataTransferObject)
            {
                if (newDoorAudit.DoInsert)
                {
                    newIDA = new ImportDoorAccess
                    {
                        CardNumber = newDoorAudit.CardNumber,
                        Door = newDoorAudit.Door,
                        DoorOpen = newDoorAudit.DoorOpen,
                        Imported = newDoorAudit.Imported,
                        RawData = newDoorAudit.RawData,
                        UserName = newDoorAudit.UserName
                    };
                    myEntities.AddToImportDoorAccess(newIDA);
                }
            }
            myEntities.SaveChanges();
        }
    I am also getting this error:
        System.Data.UpdateException was unhandled
        Message="Unable to update the EntitySet 'ImportDoorAccess' because it has a DefiningQuery and no <InsertFunction> element exists in the <ModificationFunctionMapping> element to support the current operation."
        Source="System.Data.SqlServerCe.Entity"
    What am I doing wrong? Any pointers are welcome.
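    As an illustration only (not from the original question), the duplicate check above could be done with a single set lookup instead of a nested Where per database record; this sketch assumes CardNumber is a string and that System and System.Linq are in scope:
        // Build a set of the (DoorOpen, CardNumber) pairs that already exist in the queried range.
        var existing = new HashSet<Tuple<DateTime, string>>(
            recCheck.Select(r => Tuple.Create(r.DoorOpen, r.CardNumber)));
        foreach (DoorAudit audit in dataTransferObject)
        {
            // Flag incoming rows whose key pair is already present, so the insert loop skips them.
            if (existing.Contains(Tuple.Create(audit.DoorOpen, audit.CardNumber)))
                audit.DoInsert = false;
        }
    This flags every incoming row whose key pair already exists, without re-scanning the incoming list for each database record.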

    Read the article

  • Problem with Entity Framework : "The underlying provider failed on Open"

    - by pokrate
    Hi, When I try to insert a record, I get this error : The underlying provider failed on Open. This error occurs only with IIS and not with VWD 2008's webserver. In the EventViewer I get this Application Error : Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed. [CLIENT: ] <add name="ASPNETDBEntities" connectionString="metadata=res://*/Models.FriendList.csdl|res://*/Models.FriendList.ssdl|res://*/Models.FriendList.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\ASPNETDB.MDF;Integrated Security=True;Connect Timeout=30;User Instance=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /> I am using aspnetdb.mdf file, and not any external database. I have searched enough for this, but no use. Everything works fine with VWD webserver

    Read the article

  • Entity Framework - An object with the same key already exists in the ObjectStateManager

    - by Justin
    Hey all, I'm trying to update a detached entity in .NET 4 EF:
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Save(Developer developer)
        {
            developer.UpdateDate = DateTime.Now;
            if (developer.DeveloperID == 0)
            { //inserting new developer.
                DataContext.DeveloperData.Insert(developer);
            }
            else
            { //attaching existing developer.
                DataContext.DeveloperData.Attach(developer);
            }
            //save changes.
            DataContext.SaveChanges();
            //redirect to developer list.
            return RedirectToAction("Index");
        }

        public static void Attach(Developer developer)
        {
            var d = new Developer { DeveloperID = developer.DeveloperID };
            db.Developers.Attach(d);
            db.Developers.ApplyCurrentValues(developer);
        }
    However, this gives the following error: An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key. Anyone know what I'm missing? Thanks, Justin
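    For illustration (not from the post): in EF4, one common way to avoid attaching a second object with an already-tracked key is to ask the ObjectStateManager first. A minimal sketch, assuming the entity set is named 'Developers' in a container named 'MyEntities', that 'db' is the same ObjectContext as above, and that System.Data and System.Data.Objects are imported:
        public static void AttachOrUpdate(Developer developer)
        {
            // Build the EntityKey the context uses to track this entity (container and set names assumed).
            var key = new EntityKey("MyEntities.Developers", "DeveloperID", developer.DeveloperID);
            ObjectStateEntry entry;
            if (!db.ObjectStateManager.TryGetObjectStateEntry(key, out entry))
            {
                // Nothing with this key is tracked yet, so attach a stub to receive the values.
                var stub = new Developer { DeveloperID = developer.DeveloperID };
                db.Developers.Attach(stub);
            }
            // Copy the scalar values of the detached object onto whatever is attached for this key.
            db.Developers.ApplyCurrentValues(developer);
        }
    The point of the check is that Attach throws exactly the error quoted above whenever an object with the same key is already in the ObjectStateManager.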

    Read the article

  • Creating blob properties with Entity Framework 4?

    - by David Veeneman
    I am creating an EF4 model-first application with a WPF UI. One of the controls on my UI is a RichTextDocument, which outputs a WPF FlowDocument. I can either serialize the FlowDocument to a byte array, or extract its XAML markup as a string. I would prefer to use binary serialization, if I can. Here are my questions: If I serialize to a byte array, how do I specify an entity property as a byte array in the EDM Designer? If I extract a XAML markup string, can I specify that the EDM Designer create the corresponding database column as a nvarchar(max) column? As to the second question, I assume I could always manually edit the MyModel.edmx.sql file to change the data type from nvarchar(4000) to nvarchar(max) before executing it, but I would like to know if it can be done in the Designer. Thanks for your help.
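    For reference (not part of the original question), both serialization options it describes are short in WPF; a sketch, assuming 'doc' is the FlowDocument produced by the rich text control:
        using System.IO;
        using System.Windows;
        using System.Windows.Documents;
        using System.Windows.Markup;

        static byte[] ToBytes(FlowDocument doc)
        {
            using (var ms = new MemoryStream())
            {
                // XamlPackage is a binary package format that also carries embedded content such as images.
                new TextRange(doc.ContentStart, doc.ContentEnd).Save(ms, DataFormats.XamlPackage);
                return ms.ToArray();
            }
        }

        static string ToXamlString(FlowDocument doc)
        {
            // Plain XAML markup, which would fit a string/nvarchar(max) column.
            return XamlWriter.Save(doc);
        }
    The byte-array form would correspond to a Binary-typed property in the conceptual model, while the string form corresponds to a String property with its Max Length facet set to Max.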

    Read the article

  • SqlBulkCopy and Entity Framework

    - by KP
    My current project consists of 3 standard layers: data, business, and presentation. I would like to use data entities for all my data access needs. Part of the app's functionality is that it will need to copy all the data in a flat file into a database. The file is not so big, so I can use SqlBulkCopy. I have found several articles about using the SqlBulkCopy class in .NET; however, all of them use DataTables to move data back and forth. Is there a way to use data entities along with SqlBulkCopy, or will I have to use DataTables?
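    For illustration (not from the question), the usual pattern is to project the entities into a DataTable at the boundary and hand that to SqlBulkCopy; the entity type, table and column names below are invented:
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static void BulkInsert(string connectionString, IEnumerable<Customer> customers)
        {
            // Shape a DataTable that matches the destination table's columns.
            var table = new DataTable("Customers");
            table.Columns.Add("Name", typeof(string));
            table.Columns.Add("Email", typeof(string));
            foreach (var c in customers)
                table.Rows.Add(c.Name, c.Email);

            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "dbo.Customers";
                bulk.WriteToServer(table);
            }
        }

        // Hypothetical entity type used only for this sketch.
        class Customer { public string Name { get; set; } public string Email { get; set; } }
    SqlBulkCopy itself only accepts DataTable, DataRow[] and IDataReader sources, so the entities are converted at that point rather than passed in directly.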

    Read the article

  • Entity Framework + AutoMapper ( Entity to DTO and DTO to Entity )

    - by vbobruisk
    Hello. I've got some problems using EF with AutoMapper. For example: I've got 2 related entities (Customers and Orders) and their DTO classes:
        class CustomerDTO
        {
            public string CustomerID {get;set;}
            public string CustomerName {get;set;}
            public IList<OrderDTO> Orders {get;set;}
        }

        class OrderDTO
        {
            public string OrderID {get;set;}
            public string OrderDetails {get;set;}
            public CustomerDTO Customers {get;set;}
        }

        //when mapping Entity to DTO the code works
        Customers cust = getCustomer(id);
        Mapper.CreateMap<Customers, CustomerDTO>();
        Mapper.CreateMap<Orders, OrderDTO>();
        CustomerDTO custDTO = Mapper.Map<Customers, CustomerDTO>(cust);

        //but when I try to map back from DTO to Entity it fails with AutoMapperMappingException.
        Mapper.Reset();
        Mapper.CreateMap<CustomerDTO, Customers>();
        Mapper.CreateMap<OrderDTO, Orders>();
        Customers customerModel = Mapper.Map<CustomerDTO, Customers>(custDTO); // exception is thrown here
    Am I doing something wrong? Thanks in advance!
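    Purely as an illustration (not from the post): one approach sometimes used for the DTO-to-entity direction is to ignore the circular navigation property during mapping and restore it afterwards. A sketch, assuming the Orders entity exposes a Customers reference just like the DTO does:
        Mapper.CreateMap<CustomerDTO, Customers>();
        Mapper.CreateMap<OrderDTO, Orders>()
              .ForMember(o => o.Customers, opt => opt.Ignore()); // skip the back-reference to break the cycle

        Customers customerModel = Mapper.Map<CustomerDTO, Customers>(custDTO);
        foreach (var order in customerModel.Orders)
            order.Customers = customerModel; // re-link the parent by hand
    Whether this removes the AutoMapperMappingException depends on what the inner exception reports, so it is only a starting point.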

    Read the article
