Search Results

Search found 12476 results on 500 pages for 'unit testing'.

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000 - A Survey of Software Reuse Repositories Defense Software Repository System (DSRS) The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects... ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts... The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies. ... [14] DSRS - Defense Technology for Adaptable, Reliable Systems URL: http://ssed1.ims.disa.mil/srp/dsrspage.html [15] STARS - Software Technology for Adaptable, Reliable Systems URL: http://www.stars.ballston.paramax.com/index.html [16] D. E. Perry and S. S. Popovitch, “Inquire: Predicate-based use and reuse,'' in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993. ... Is DSRS dead, and were there any post-mortem reports on it? Are there other more-recent US government initiatives or reports on software reuse?

  • Upgrades from Beta or CTP SQL Server Software is NOT Supported

    - by BuckWoody
    As of this writing, SQL Server 2008 R2 has been released, and just like every release, I get e-mails and calls from folks with this question: “Can I upgrade from Community Technology Preview (CTP) x or Beta #x or Release Candidate (RC) to the “Released to Manufacturing” (RTM) version?” No. Right up until the last minute, things are changing in the code – and you want that to happen. Our internal testing runs right up until the second we lock down for release, and we watch the CTP/RC/Beta reports to make sure there are no show-stoppers, and fix what we find. And it’s not just “big” changes you need to worry about – a simple change in one line of code can have a massive effect. I know, I know – you’ve possibly upgraded an RC or CTP to the RTM version and it worked “just fine”. But hear this tale: I’ve dealt with someone who faced this exact situation in SQL Server 2008. They upgraded (which is clearly prohibited in the documentation) from a CTP to the RTM version over a year ago. Everything was working fine. But then…one day they had an issue. Couldn’t fix it themselves, we took a look, days went by, and we finally had to call in the big guns for support. Turns out, the upgrade was the problem. So we had to come up with some elaborate schemes to get the system migrated over while they were in production. This was painful for everyone involved. So the answer is still no. Just don’t do it. There is one caveat to this story – if you are a “TAP” customer (you’ll know if you are), we help you move from the CTP products to RTM, but that’s a special case that we track carefully and send along special instructions and tools to help you along. That level of effort isn’t possible on a large scale, so it’s not just a magic tool that we run to upgrade from CTP to RTM. So again, unless you’re a TAP customer, it’s a no-no.

  • C# Collision Math Help

    - by user36037
    I am making my own collision detection in MonoGame. I have a PolyLine class That has a property to return the normal of that PolyLine instance. I have a ConvexPolySprite class that has a List LineSegments. I hav a CircleSprite class that has a Center Property and a Radius Property. I am using a static class for the collision detection method. I am testing it on a single line segment. Vector2(200,0) = Vector2(300, 200) The problem is it detects the collision anywhere along the path of line out into space. I cannot figure out why. Thanks in advance; public class PolyLine { //--------------------------------------------------------------------------------------------------------------------------- // Class Properties /// <summary> /// Property for the upper left-hand corner of the owner of this instance /// </summary> public Vector2 ParentPosition { get; set; } /// <summary> /// Relative start point of the line segment /// </summary> public Vector2 RelativeStartPoint { get; set; } /// <summary> /// Relative end point of the line segment /// </summary> public Vector2 RelativeEndPoint { get; set; } /// <summary> /// Property that gets the absolute position of the starting point of the line segment /// </summary> public Vector2 AbsoluteStartPoint { get { return ParentPosition + RelativeStartPoint; } }//end of AbsoluteStartPoint /// <summary> /// Gets the absolute position of the end point of the line segment /// </summary> public Vector2 AbsoluteEndPoint { get { return ParentPosition + RelativeEndPoint; } }//end of AbsoluteEndPoint public Vector2 NormalizedLeftNormal { get { Vector2 P = AbsoluteEndPoint - AbsoluteStartPoint; P.Normalize(); float x = P.X; float y = P.Y; return new Vector2(-y, x); } }//end of NormalizedLeftNormal //--------------------------------------------------------------------------------------------------------------------------- // Class Constructors /// <summary> /// Sole ctor /// </summary> /// <param name="parentPosition"></param> /// <param name="relStart"></param> /// <param name="relEnd"></param> public PolyLine(Vector2 parentPosition, Vector2 relStart, Vector2 relEnd) { ParentPosition = parentPosition; RelativeEndPoint = relEnd; RelativeStartPoint = relStart; }//end of ctor }//end of PolyLine class public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly) { var distance = Vector2.Dot(circle.Position - poly.LineSegments[0].AbsoluteEndPoint, poly.LineSegments[0].NormalizedLeftNormal) + circle.Radius; if (distance <= 0) { return false; } else { return true; } }//end of collided
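
    The posted Collided method only measures the signed distance from the circle's centre to the segment's supporting line, so anything along that line's infinite extension registers as a hit. A minimal sketch of a clamped closest-point test, reusing the member names from the question (LineSegments, AbsoluteStartPoint, AbsoluteEndPoint, Position, Radius) and checking every segment of the polygon, might look like this; the clamp to the segment's endpoints is the step the original check is missing:

        // Sketch only: closest-point test per segment, assuming the PolyLine and
        // CircleSprite members shown in the question.
        public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
        {
            foreach (PolyLine segment in poly.LineSegments)
            {
                Vector2 a = segment.AbsoluteStartPoint;
                Vector2 b = segment.AbsoluteEndPoint;
                Vector2 ab = b - a;

                // Project the circle centre onto the segment and clamp to [0,1]
                // so points beyond the endpoints are not treated as hits.
                float lenSq = ab.LengthSquared();
                float t = lenSq > 0f ? Vector2.Dot(circle.Position - a, ab) / lenSq : 0f;
                t = MathHelper.Clamp(t, 0f, 1f);

                Vector2 closest = a + t * ab;
                if (Vector2.DistanceSquared(circle.Position, closest) <= circle.Radius * circle.Radius)
                {
                    return true;
                }
            }
            return false;
        }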

  • How do I prevent ISPs from killing downloads of files in mid-transfer?

    - by Gorchestopher H
    I run a small website with a few users, low traffic, mostly to share personal mp3 files with a small community. Depending on their ISP, my users can't always download or stream larger files. By larger I mean larger than 1MB. Essentially the host either stops sending, or the client stops receiving. One of the links along the connection chain simply ends its connection before the transfer completes Trace-route shows no connection issues. There are no connection issues with short transfers that don't take more than a few seconds. It's these 10 second transfers that just end up ending. Just doing a straight download with a direct link can yield this error if you have the wrong ISP. Strangely enough, this is most common with users with ISPs who are essentially independent providers that buy service via a fiber link. Unfortunately these providers aren't very knowledgeable, are unable to do any testing, and insist it's a problem with the host. I have gotten my host to transfer my site to different servers of their, to the same effect. Nearly identical sites (affiliate sites actually) experience no such issue. What can I be doing to further troubleshoot this matter? How can I prove that someone is dropping the ball, and identify who that party is? Can I do a 5Mb traceroute? EDIT Maybe I can clear up some misconceptions with my question: The files are not very large. They are simply over 2Mb. The users do not have "slow" connections, they are at least 5mbps. This "time out" happens very quickly, in the realm of 5 seconds, so I don't know if it's a timeout or not. The user often gets 1 or 2Mb in this chunk of time. I have tried streaming with a flash player. I have tried saving the target. Forcing the download. I have tried allowing the browser to stream the file. I have tried different browsers (FF, IE, Chrome). Users are able to download identical files when on different hosts.
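
    One way to gather evidence before blaming either side is to run a throwaway probe from an affected user's machine and log exactly how many bytes and seconds elapse before each transfer dies; if the cut-off is consistent across attempts, that points at a proxy or traffic shaper in the path rather than the host. A rough sketch (the URL is a placeholder for one of the affected files):

        using System;
        using System.Net;

        // Throwaway probe, not a fix: repeatedly download the same file and log
        // how far each attempt gets before the connection dies.
        class TransferProbe
        {
            static void Main()
            {
                const string url = "http://example.com/files/sample.mp3"; // hypothetical affected file
                for (int attempt = 1; attempt <= 5; attempt++)
                {
                    long received = 0;
                    DateTime started = DateTime.UtcNow;
                    try
                    {
                        var request = (HttpWebRequest)WebRequest.Create(url);
                        using (var response = request.GetResponse())
                        using (var stream = response.GetResponseStream())
                        {
                            var buffer = new byte[8192];
                            int read;
                            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                            {
                                received += read;
                            }
                        }
                        Console.WriteLine("Attempt {0}: completed, {1} bytes", attempt, received);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Attempt {0}: died after {1} bytes / {2:F1}s ({3})",
                            attempt, received, (DateTime.UtcNow - started).TotalSeconds, ex.Message);
                    }
                }
            }
        }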

  • Selenium – Use Data Driven tests to run in multiple browsers and sizes

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/11/04/selenium-ndash-use-data-driven-tests-to-run-in-multiple.aspxSelenium uses WebDriver (or is it the same? I’m still learning how it is connected) to run Automated UI tests in many different browsers. For example, you can run the same test in Chrome and Firefox and in a smaller sized Chrome browser. The permutations can grow quickly. One way to get them to run in MStest is to create  a test method for each test (ie ChromeDeleteItem_Small, ChromeDeleteItem_Large, FFDeleteItem_Small, FFDeleteItem_Large) that each call  the same base method, passing in the browser and size you’d like. This approach was causing a lot of duplicate code, so I decided to use the data driven approach, common to Coded UI or Unit test methods. 1. Create a class with a test method. 2. Create a csv with two columns: BrowserType, BrowserSize 3. Add rows for each permutation: Chrome, Large | Chrome, Small | Firefox, Large | Firefox, Small | IE, Large | IE, Small | *** 4. Add the csv to the Visual Studio Project. 5. Set the Copy to output directory to Copy always 6. Add the attribute: [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")] 7. Run the test in the test explorer Example:[CodedUITest] public class AllTasksTests : TasksTestBase { [TestMethod] [TestCategory("Tasks")] [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")] public void CreateTask() { this.PrepForDataDrivenTest(); base.CreateTaskTest("New Task"); } } protected void PrepForDataDrivenTest() { var browserType = this.ParseBrowserType(Context.DataRow["BrowserType"].ToString()); var browserSize = this.ParseBrowserSize(Context.DataRow["BrowserSize"].ToString()); this.BrowserType = browserType; this.BrowserSize = browserSize; Trace.WriteLine("browser: " + browserType.ToString()); Trace.WriteLine("browser size: " + browserSize.ToString()); } /// <summary> /// Get the enum value from the string /// </summary> /// <param name="browserType">Chrome, Firefox, or IE</param> /// <returns>The browser type.</returns> private BrowserType ParseBrowserType(string browserType) { return (UITestFramework.BrowserType)Enum.Parse(typeof(UITestFramework.BrowserType), browserType, true); } /// <summary> /// Get the browser size enum value from the string /// </summary> /// <param name="browserSize">Small, Medium, Large</param> /// <returns>the browser size</returns> private BrowserSizeEnum ParseBrowserSize(string browserSize) { return (BrowserSizeEnum)Enum.Parse(typeof(BrowserSizeEnum), browserSize, true); }/// <summary> /// Change the browser to the size based on the enum. 
/// </summary> /// <param name="browserSize">The BrowserSizeEnum value to resize the window to.</param> private void ResizeBrowser(BrowserSizeEnum browserSize) { switch (browserSize) { case BrowserSizeEnum.Large: this.driver.Manage().Window.Maximize(); break; case BrowserSizeEnum.Medium: this.driver.Manage().Window.Size = new Size(800, this.driver.Manage().Window.Size.Height); break; case BrowserSizeEnum.Small: this.driver.Manage().Window.Size = new Size(500, this.driver.Manage().Window.Size.Height); break; default: break; } }/// <summary> /// Browser sizes for automation testing /// </summary> public enum BrowserSizeEnum { /// <summary> /// Large size, Maximized to the desktop /// </summary> Large, /// <summary> /// Similar to tablets /// </summary> Medium, /// <summary> /// Phone sizes... 610px and smaller /// </summary> Small } Hope it helps!
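
    For completeness, a hypothetical sketch of what the base class might do with the parsed values when the browser is started; the StartBrowser method and the BrowserType member names (Chrome, Firefox, IE) are assumptions, while ResizeBrowser and the driver field mirror the code above:

        // Hypothetical sketch of how the base class could turn the parsed enum
        // values into an actual WebDriver instance; the driver mapping is assumed.
        protected void StartBrowser()
        {
            switch (this.BrowserType)
            {
                case UITestFramework.BrowserType.Chrome:
                    this.driver = new OpenQA.Selenium.Chrome.ChromeDriver();
                    break;
                case UITestFramework.BrowserType.Firefox:
                    this.driver = new OpenQA.Selenium.Firefox.FirefoxDriver();
                    break;
                case UITestFramework.BrowserType.IE:
                    this.driver = new OpenQA.Selenium.IE.InternetExplorerDriver();
                    break;
            }

            // Apply the requested window size before the test starts interacting
            // with the page, so responsive layouts are already in their target state.
            this.ResizeBrowser(this.BrowserSize);
        }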

  • SharePoint 2010 HierarchicalConfig Caching Problem

    - by Damon
    We've started using the Application Foundations for SharePoint 2010 in some of our projects at work, and I came across a nasty issue with the hierarchical configuration settings.  I have some settings that I am storing at the Farm level, and as I was testing my code it seemed like the settings were not being saved - at least that is what it appeared was the case at first.  However, I happened to reset IIS and the settings suddenly appeared.  Immediately, I figured that it must be a caching issue and dug into the code base.  I found that there was a 10 second caching mechanism in the SPFarmPropertyBag and the SPWebAppPropertyBag classes.  So I ran another test where I waited 10 seconds to make sure that enough time had passed to force the caching mechanism to reset the data.  After 10 minutes the cache had still not cleared.  After digging a bit further, I found a double lock check that looked a bit off in the GetSettingsStore() method of the SPFarmPropertyBag class: if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds) > cacheInterval)) { //Need to exist so don't deadlock. rrLock.EnterWriteLock(); try { //make sure first another thread didn't already load... if (_settingStore == null) { _settingStore = WebAppSettingStore.Load(this.webApplication); lastLoad = DateTime.Now; } } finally { rrLock.ExitWriteLock(); } } What ends up happening here is the outer check determines if the _settingStore is null or the cache has expired, but the inner check is just checking if the _settingStore is null (which is never the case after the first time it's been loaded).  Ergo, the cached settings are never reset.  The fix is really easy, just add the cache checking back into the inner if statement. //make sure first another thread didn't already load... if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds) > cacheInterval) { _settingStore = WebAppSettingStore.Load(this.webApplication); lastLoad = DateTime.Now; } And then it starts working just fine. as long as you wait at least 10 seconds for the cache to clear.
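
    The same shape comes up in any time-boxed cache: the refresh condition has to be repeated in full inside the write lock, otherwise only the very first load ever happens. A generic sketch of the corrected pattern (names are illustrative, not the Application Foundations API):

        using System;
        using System.Threading;

        // Illustrative time-boxed cache; the point is that the expiry check is
        // repeated inside the write lock, not just the null check.
        public class ExpiringCache<T> where T : class
        {
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
            private readonly Func<T> _load;
            private readonly TimeSpan _maxAge;
            private T _value;
            private DateTime _lastLoad = DateTime.MinValue;

            public ExpiringCache(Func<T> load, TimeSpan maxAge)
            {
                _load = load;
                _maxAge = maxAge;
            }

            public T Value
            {
                get
                {
                    if (_value == null || DateTime.Now - _lastLoad > _maxAge)
                    {
                        _lock.EnterWriteLock();
                        try
                        {
                            // Re-check *both* conditions: another thread may have
                            // refreshed the value while we waited for the lock.
                            if (_value == null || DateTime.Now - _lastLoad > _maxAge)
                            {
                                _value = _load();
                                _lastLoad = DateTime.Now;
                            }
                        }
                        finally
                        {
                            _lock.ExitWriteLock();
                        }
                    }
                    return _value;
                }
            }
        }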

  • WebLogic Partner Community Newsletter October 2012

    - by JuergenKress
    Dear WebLogic partner community member Oracle OpenWorld and JavaOne are just over, with lots of product updates and highlights. In this newsletter you will find the key information on many new products and launches. Make sure you download the presentation from our WebLogic Community Workspace (WebLogic Community membership required), to train yourself and for your next customer meeting. Thanks for all the tweets #WebLogicCommunity, the pictures at our facebook page and the nice blog posts from Guido & Lucas & Jan. JavaOne was a super success - JavaOne 2012: Strategy and Technical Keynote - Java 2.5 years after the acquisition - IDC report - make the future Java! If you want to become a Java Expert, make sure you attend one of our WebLogic 12c Bootcamps or our first ExaLogic Hackers Night - November 19th Nürnberg Germany. All developers can use WebLogic free of charge! For developers, there are lots of ADF news on Oracle ADF Essentials & ADF training material now on the iPad by Grant Ronald & GlassFish Extension for Oracle JDeveloper & Installing, Configuring, and Testing WebLogic Server 12c Developer Zip Distribution in NetBeans. If you want to become a certified WebLogic company, WebLogic Server 12c Specialization is now available for you. You just need to go to the Knowledge Zone section, select the “Specialization” tab and click on “Apply Now”. Now available: WebLogic Server 12c Implementation Specialist Boot Camp LVT. Now in Production: Oracle WebLogic Server 12c Implementation Specialist certification (1Z0-599). In our specialization benefit series we highlight this month the opportunity to promote your WebLogic services by google ads. Torsten Winterberg, OFM ACE Director, published Mobile Web Applications – A guide for professional development. Please feel free to let us know if you publish a book or article! Hope to see you at the Middleware Day at UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress Oracle WebLogic Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/WebLogicnewsOctober2012 (OPN account required) To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

  • How to make a stack stable? Need help for an explicit resting contact scheme (2-dimensional)

    - by Register Sole
    Previously, I struggle with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the codes and implement the simultaneous impulse based method with Projected-Gauss-Seidel (PGS) iterative solver as described by Erin Catto (mentioned in the reference of the paper as [Catt05]). So here's how it currently is: The simulation handles 2-dimensional rotating convex polygons. Detection is using separating-axis test, with a SKIN, meaning closest points between two polygons is detected and determined if their distance is less than SKIN. To resolve collision, simultaneous impulse-based method is used. It is solved using iterative solver (PGS-solver) as in Erin Catto's paper. Error-correction is implemented using Baumgarte's stabilization (you can refer to either paper for this) using J V = beta/dt*overlap, J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that is better be < 1, dt the time-step taken by the engine, and overlap, the overlap between the bodies (true overlap, so SKIN is ignored). However, it is still less stable than I expected :s I tried to stack hexagons (or squares, doesn't really matter), and even with only 4 to 5 of them, they would swing! Also note that I am not looking for a sleeping scheme. But I would settle if you have any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state). Ideas I have tried: Using simultaneous position based error correction as described in the paper in section 5.3.2, turned out to be worse than the current scheme. If you want to know the parameters I used: Hexagons, side 50 (pixels) gravity 2400 (pixels/sec^2) time-step 1/60 (sec) beta 0.1 restitution 0 to 0.2 coeff. of friction 0.2 PGS iteration 10 initial separation 10 (pixels) mass 1 (unit is irrelevant for now, i modified velocity directly<-impulse method) inertia 1/1000 Thanks in advance! I really appreciate any help from you guys!! :) EDIT In response to Cholesky's comment about warm starting the solver and Baumgarte: Oh right, I forgot to mention! I do save the contact history and the impulse determined in this time step to be used as initial guess in the next time step. As for the Baumgarte, here's what actually happens in the code. Collision is detected when the bodies' closest distance is less than SKIN, meaning they are actually still separated. If at this moment, I used the PGS solver without Baumgarte, restitution of 0 alone would be able to stop the bodies, separated by a distance of ~SKIN, in mid-air! So this isn't right, I want to have the bodies touching each other. So I turn on the Baumgarte, where its role is actually to pull the bodies together! Weird I know, a scheme intended to push the body apart becomes useful for the reverse. Also, I found that if I increase the number of iteration to 100, stacks become much more stable, though the program becomes so slow. UPDATE Since the stack swings left and right, could it be something is wrong with my friction model? Current friction constraint: relative_tangential_velocity = 0
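
    For reference, many Box2D-style solvers tame resting contacts by folding a penetration slop and a restitution threshold into the per-contact velocity bias, so Baumgarte stops pumping energy into a settled stack. A sketch of that bias term, with illustrative names and constants (not taken from the paper), assuming overlap is the true penetration depth and relativeNormalVelocity is the pre-solve approach speed along the contact normal (negative when closing):

        using System;

        // Sketch of the per-contact velocity bias used in many Box2D-style PGS
        // solvers; names and constants are illustrative.
        static class ContactBias
        {
            public static float Compute(float overlap, float relativeNormalVelocity,
                                        float beta, float slop, float restitution, float dt)
            {
                // Only correct penetration beyond the allowed slop, so small resting
                // overlaps are tolerated instead of being pushed apart every frame
                // (a common source of jitter and stacks swinging left and right).
                float positionBias = (beta / dt) * Math.Max(0f, overlap - slop);

                // Apply restitution only for genuine impacts; resting contacts with
                // tiny approach velocities get none, so they can settle.
                const float restitutionThreshold = 1.0f;
                float restitutionBias = relativeNormalVelocity < -restitutionThreshold
                    ? -restitution * relativeNormalVelocity
                    : 0f;

                return positionBias + restitutionBias;
            }
        }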

  • How to configure a longer version number in Artifactory

    - by claudine
    The version numbers for our jars have to be longer than x.x.x. We would rather need x.x.x.x to integrate some old-fashioned self-made mechanism. This is because we tag our software with x.x.x, and as soon as we have a delivery to a customer, one specific jar has to be built exactly at this point in time to fit to another backend, which communicates with our program. For that reason this one jar has the version 2.3.4.1 when generated, and in the next delivery of the same version it is built and named 2.3.4.2. Now Artifactory cannot handle this and doesn't save more than x.x.x.2 in some cases. So we thought of maybe editing the regular expression in the Maven repository layout (see attached screenshot), because testing the path in the field below shows that it cannot handle the version number. Of course, x.x.x still has to work for the rest of our jars. For example, here is the maven-metadata.xml: <?xml version="1.0" encoding="UTF-8"?> <metadata> <groupId>com.firm</groupId> <artifactId>someid</artifactId> <version>1.5.1</version> <versioning> <latest>1.5.1</latest> <release>1.5.1</release> <versions> <version>1.4.62</version> </versions> <lastUpdated>20120926073942</lastUpdated> </versioning> </metadata> The folder structure looks like: someid 1.4.62 1.4.62.1 1.4.62.2 1.4.62.3 If we deploy a new artifact version (1.4.62.1), the maven-metadata.xml contains the 1.4.62.1 version. But Artifactory overwrites the version number (1.4.62.x) to (1.4.62) after an unspecified time. It seems that Artifactory only supports major, minor and revision numbers, and deletes the build number. Now we are looking for a solution to disable this behavior. We use JFrog Artifactory version 2.5.0 (rev. 13086). Any ideas, maybe? Thanks in advance

  • Should we persist with an employee still writing bad code after many years?

    - by user94986
    I've been assigned the task of managing developers for a well-established company. They have a single developer who specialises in all their C++ coding (since forever), but the quality of the work is abysmal. Code reviews and testing have revealed many problems, one of the worst being memory leaks. The developer has never tested his code for leaks, and I discovered that the applications could leak many MBs with only a minute of use. User's were reporting huge slowdowns, and his take was, "it's nothing to do with me - if they quit and restart, it's all good again." I've given him tools to detect and trace the leaks, and sat down with him for many hours to demonstrate how the tools are used, where the problems occur, and what to do to fix them. We're 6 months down the track, and I assigned him to write a new module. I reviewed it before it was integrated into our larger code base, and was dismayed to discover the same bad coding as before. The part that I find incomprehensible is that some of the coding is worse than amateurish. For example, he wanted a class (Foo) that could populate an object of another class (Bar). He decided that Foo would hold a reference to Bar, e.g.: class Foo { public: Foo(Bar& bar) : m_bar(bar) {} private: Bar& m_bar; }; But (for other reasons) he also needed a default constructor for Foo and, rather than question his initial design, he wrote this gem: Foo::Foo() : m_bar(*(new Bar)) {} So every time the default constructor is called, a Bar is leaked. To make matters worse, Foo allocates memory from the heap for 2 other objects, but he didn't write a destructor or copy constructor. So every allocation of Foo actually leaks 3 different objects, and you can imagine what happened when a Foo was copied. And - it only gets better - he repeated the same pattern on three other classes, so it isn't a one-off slip. The whole concept is wrong on so many levels. I would feel more understanding if this came from a total novice. But this guy has been doing this for many years and has had very focussed training and advice over the past few months. I realise he has been working without mentoring or peer reviews most of that time, but I'm beginning to feel he can't change. So my question is, would you persist with someone who is writing such obviously bad code?

  • Multiple Zend application code organisation

    - by user966936
    For the past year I have been working on a series of applications all based on the Zend framework and centered on a complex business logic that all applications must have access to even if they don't use all (easier than having multiple library folders for each application as they are all linked together with a common center). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have "grouped" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so in 12 months time when my time is up anyone else coming in can have no problem extending on what I have produced. Example structure: applicationStorage\ (contains all applications and associated data) applicationStorage\Applications\ (contains the applications themselves) applicationStorage\Applications\external\ (application grouping folder) (contains all external customer access applications) applicationStorage\Applications\external\site\ (main external customer access application) applicationStorage\Applications\external\site\Modules\ applicationStorage\Applications\external\site\Config\ applicationStorage\Applications\external\site\Layouts\ applicationStorage\Applications\external\site\ZendExtended\ (contains extended Zend classes specific to this application example: ZendExtended_Controller_Action extends zend_controller_Action ) applicationStorage\Applications\external\mobile\ (mobile external customer access application different workflow limited capabilities compared to full site version) applicationStorage\Applications\internal\ (application grouping folder) (contains all internal company applications) applicationStorage\Applications\internal\site\ (main internal application) applicationStorage\Applications\internal\mobile\ (mobile access has different flow and limited abilities compared to main site version) applicationStorage\Tests\ (contains PHP unit tests) applicationStorage\Library\ applicationStorage\Library\Service\ (contains all business logic, services and servicelocator; these are completely decoupled from Zend framework and rely on models' interfaces) applicationStorage\Library\Zend\ (Zend framework) applicationStorage\Library\Models\ (doesn't know services but is linked to Zend framework for DB operations; contains model interfaces and model datamappers for all business objects; examples include Iorder/IorderMapper, Iworksheet/IWorksheetMapper, Icustomer/IcustomerMapper) (Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder; but i have omitted them as they are not required for my purposes.) For the library this contains all "universal" code. The Zend framework is at the heart of all applications, but I wanted my business logic to be Zend-framework-independent. All model and mapper interfaces have no public references to Zend_Db but actually wrap around it in private. So my hope is that in the future I will be able to rewrite the mappers and dbtables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend framework if I want to move my business logic (services) to a non-Zend framework environment (maybe some other PHP framework). 
    Using a serviceLocator and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. Example: all external applications will have a service_auth_External implementing service_auth_Interface registered. Same with internal applications: Service_Auth_Internal implementing service_auth_Interface, retrieved via Service_Locator::getService('Auth'). I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, then a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions I would be greatly appreciative. I have used context switching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorization and different available services. \applicationstorage\Applications\internal\webservice \applicationstorage\Applications\external\webservice

  • (Android) How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer using OpenGL ES 1.1 on Android, Java. Afterwords I want to overlay this texture full-screen over my game. In theory, this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it partly is off-screen. This is how I create the framebuffer: fb = new int[1]; depthRb = new int[1]; renderTex = new int[1]; gl11ep.glGenFramebuffersOES(1, fb, 0); gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer gl.glGenTextures(1, renderTex, 0);// generate texture gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); texBuffer = ByteBuffer.allocateDirect(buf.length*4).order(ByteOrder.nativeOrder()).asIntBuffer(); gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer); gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]); gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH); Before I draw, I do this: gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]); gl.glClearColor(0f, 0f, 0f, 0f); // specify texture as color attachment gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0); // attach render buffer as depth buffer gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]); I set texW = 1024 and texH = 512. When rendering this texture fullscreen, with a lightmask (size 200x200) placed at (0, 0) and (texW/2, texH/2). You can see that it seems like the coordinate system doesnt start at (0,0) as that light overlaps the screen and the images are not drawn as squares (my lightcone-texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks

  • Redesigning an Information System - Part 1

    - by dbradley
    Through the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest of the project I've worked on and explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now as it's not yet complete so you'll have to forgive me for being a little abstract in places! To start with I'm going to run through a little of the background of the problem and the motivations to re-design from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system. Motivations for Re-designing a Large Information System The system is one that's been in place for a number of years and was originally designed to do a significantly different one to what it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems this one can be defined in four main areas of functionality: Input – adding information to the system Storage – persisting information in an efficient, searchable structure Output – delivering the information to the client Control – management of the process There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as: Overall system reliability System response time Failure isolation and recovery Maintainability of code and information General extensibility to solve future problem Separation of business and product concerns New or improved features The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign. Business Drivers Typically all software engineers would always prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium sized business such as mine there must be actual value able to be delivered within a reasonable timeframe for any work to be started. As a result, any long term project will generally take a lot of effort and consideration to be approved by those in charge and therefore it might be better to break down the project into more manageable chunks which allow more frequent deliverables and also value within a shorter timeframe. As the only thing of concern was the methods for inputting information, this is where we started with requirements gathering and design. However knowing that there might be more to the problem and not limiting your design decisions before the requirements is key to finding the best solutions.

  • What is the standard term for my role?

    - by sigil
    I'm doing work that involves writing code and managing developers in a "special projects" division of a large company. I'd like to define my role better and figure out if there's an industry standard term for what I do, so that it will be easier for me to research best practices and work on a career path What I do all day: A macro that connects an Excel sheet to an Access database is acting funny; I get called in to figure out what's happening and debug it. Someone needs data extracted from a bunch of files on Sharepoint. I figure out a client-side solution because I'm not authorized to do anything server-side and getting IT to do anything would take several months and need a business case. A manager wants a new data entry tool for their team. I interview the manager and team members to work out the functional requirements, then design/develop/test the application. Someone needs a VBA script to crunch some data for their presentation that's due in two hours. I drop everything I'm doing to hack out a quick script and run the analysis, without much in the way of testing. A developer has been hired to build a database for one of the teams, since I'm working on too many different things and don't have time to take this project on in the timeframe required. I direct his work and push him to meet certain deadlines, interview stakeholders to get more info that will help him figure out how to build the necessary forms, and modify the functional requirements of the database to fit in the timeframe. Someone wants to load a set of data into a GIS system and set up an ongoing refresh and reporting of this data set. I facilitate the conversation between the GIS developers and the owners of this data set, and design a demo application as proof of concept. It's kind of an "all-purpose programming and IT management" position, but it's not officially IT because the company has an actual IT department with a rigorously defined system of submitting requests, developing code, and managing projects. What I do, I guess, is more of a handyman job, where stuff falls to me because I'm the geekiest one in the room. Is there a standard term in the software world for what I do?

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series, which examines the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine. All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases, represent the lifeblood of business operations. But just how common are the data protection strategies used and the challenges faced across various enterprises? In January 2014, Database Trends and Applications Magazine (DBTA), in partnership with Oracle, released the results of its “Oracle Database Management and Data Protection Survey”. Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets. Here are some of the key findings from the survey: The majority of respondents manage backups for tens to hundreds of databases, representing total data volume of 5 to 50TB (14% manage 50 to 200 TB and some up to 5 PB or more). About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring, however these technologies are deployed on only 25% of their databases (or less). This indicates that backups are still the predominant method for database protection among enterprises. Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, which are used by 17%. Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.  A few key backup and recovery challenges resonated across many of the respondents: Poor performance and impact on productivity (see Figure 1) 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows. In a similar vein, 23% complained that backups degrade the performance of production systems. Lack of continuous protection (see Figure 2) 35% revealed that less than 5% of Oracle data is protected in real-time.  Management complexity 25% stated that recovery operations are too complex. (see Figure 1)  31% reported that backups need constant management. (see Figure 1) 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves. Figure 1: Current Challenges with Database Backup and Recovery Figure 2: Percentage of Organization’s Data Backed Up in Real-Time or Near Real-Time In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.

  • .Net Windows Service Throws EventType clr20r3 system.data.sqlclient.sql error

    - by William Edmondson
    I have a .Net/c# 2.0 windows service. The entry point is wrapped in a try catch block, yet when I look at the server's application event log I see a number of "EventType clr20r3" errors that are causing the service to die unexpectedly. The catch block has a "catch (Exception ex)". Each sql command is of the type "CommandType.StoredProcedure" and is executed with SqlDataReader's. These sproc calls function correctly 99% of the time and have all been thoroughly unit tested, profiled, and QA'd. I additionally wrapped these calls in try catch blocks just to be sure and am still experiencing these unhandled exceptions. This happens only in our production environment and cannot be duplicated in our dev or staging environments (even under heavy load). Why would my error handling not catch this particular error? Is there any way to capture more detail as to the root cause of the problem? Here is an example of the event log: EventType clr20r3, P1 RDC.OrderProcessorService, P2 1.0.0.0, P3 4ae6a0d0, P4 system.data, P5 2.0.0.0, P6 4889deaf, P7 2490, P8 2c, P9 system.data.sqlclient.sql, P10 NIL. Additionally: The Order Processor service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.
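
    A try/catch around the entry point only covers the thread it runs on; in a .NET 2.0 service, an exception thrown on a timer, thread-pool, or worker thread escapes it, kills the process, and surfaces exactly as a clr20r3 event. A sketch of hooking the AppDomain event so the stack is at least logged before the process dies (the service class name is assumed from the event log entry, and the event source is assumed to be registered already):

        using System;
        using System.Diagnostics;
        using System.ServiceProcess;

        internal class OrderProcessorService : ServiceBase
        {
            // existing service implementation lives here (stubbed for the sketch)
        }

        static class Program
        {
            static void Main()
            {
                // Record unhandled exceptions from any thread before the CLR tears
                // the process down; this does not prevent the crash, it documents it.
                AppDomain.CurrentDomain.UnhandledException += OnUnhandledException;
                ServiceBase.Run(new OrderProcessorService());
            }

            static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
            {
                var ex = e.ExceptionObject as Exception;
                EventLog.WriteEntry("Order Processor",
                    ex != null ? ex.ToString() : "Unknown unhandled exception",
                    EventLogEntryType.Error);
            }
        }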

  • What good technology podcasts are out there?

    - by Michael Stum
    Yes, Podcasts, those nice little Audiobooks I can listen to on the way to work. With the current amount of Podcasts, it's like searching a needle in a haystack, except that the haystack happens to be the Internet and is filled with too many of these "Hot new Gadgets" stuff :( Now, even though I am mainly a .NET developer nowadays, maybe anyone knows some good Podcasts from people regarding the whole software lifecycle? Unit Testing, Continous Integration, Documentation, Deployment... So - what are you guys and gals listening to? Please note that the categorizations are somewhat subjective and may not be 100% accurate as many podcasts cover several areas. Categorization is made against what is considered the "main" area. General Software Engineering / Productivity Stack Overflow TekPub (Requires Paid Subscription) SE Radio 43 Folders Perspectives Dr. Dobb's (now a video feed) The Pragmatic Podcast (Inactive) IT Matters Agile Toolkit Podcast The Stack Trace (Inactive) Parleys Techzing The Startup Success Podcast Berkeley CS class lectures FOSS Weekly .NET / Visual Studio / Microsoft Herding Code Hanselminutes .NET Rocks! Deep Fried Bytes Alt.Net Podcast Polymorphic Podcast Sparkling Client (The Silverlight Podcast) dnrTV! Spaghetti Code ASP.NET Podcast Channel 9 Radio TFS PowerScripting Podcast The Thirsty Developer Elegant Code ConnectedShow Crafty Coders Coding QA jQuery yayQuery The official jQuery podcast Java / Groovy The Java Posse Grails Podcast Java Technology Insider Ruby / Rails Railscasts Rails Envy The Ruby on Rails Podcast Rubiverse Web Design / JavaScript / Ajax WebDevRadio Boagworld The Rissington podcast Ajaxian YUI Theater Unix / Linux / Mac / iPhone Mac Developer Network Hacker Public Radio Linux Outlaws Mac OS Ken LugRadio Linux radio show (Inactive) The Linux Action Show! Linux Kernel Mailing List (LKML) Summary Podcast Stanford's iPhone programming class SysAdmin, Security or Infrastructure RunAs Radio Security Now! Crypto-Gram Security Podcast Hak5 VMWare VMTN Windows Weekly PaulDotCom Security The Register - Semi-Coherent Computing FeatherCast General Tech / Business Tekzilla This Week in Tech The Guardian Tech Weekly PCMag Radio Podcast Entrepreneurship Corner Manager Tools Other / Misc. / Podcast Networks IT Conversations Retrobits Podcast No Agenda Netcast Cranky Geeks The Command Line Freelance Radio IBM developerWorks The Register - Open Season Drunk and Retired Technometria Sod This Radio4Nerds Hacker Medley

  • Hibernate 3.5-Final in JBoss 5.1.0.GA

    - by Bozhidar Batsov
    Hibernate 3.5-Final is finally here and it offers the much anticipated JPA2 support, amongst other features. I am working on a project(EJB3 based) using JBoss 5.1.0.GA and Hibernate 3.3, but I wanted to take advantage of the JPA2 and tried to upgrade to Hibernate 3.5. What I did was fairly simple and standard - I just put all the hibernate 3.5 jars in the server/configuration(default,all,etc)/lib folder - that way they take precedence over the hibernate artifacts shipped with JBoss. It seems though that JBoss ships with libraries that are dependent on the JPA1 implementation part of the hibernate 3.3, because I started getting some errors about unimplemented abstract methods and stuff like that on deploy: 23:21:26,792 WARN [Ejb3Configuration] Persistence provider caller does not implement the EJB3 spec correctly. PersistenceUnitInfo.getNewTempClassLoader() is null. 23:21:26,792 ERROR [AbstractKernelController] Error installing to Start: name=persistence.unit:unitName=kernel-ear-3.3.0-SNAPSHOT.ear/config-persistence.jar#ConfigurationPersistenceUnit state=Create java.lang.AbstractMethodError: org.jboss.jpa.deployment.PersistenceUnitInfoImpl.getValidationMode()Ljavax/persistence/ValidationMode; at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:613) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:72) at org.jboss.jpa.deployment.PersistenceUnitDeployment.start(PersistenceUnitDeployment.java:301) at sun.reflect.GeneratedMethodAccessor308.invoke(Unknown Source) Maybe I should use a different persistence provided? Currently it's: org.hibernate.ejb.HibernatePersistence I looked around the net and didn't find any documented upgrade paths. There was even an unanswered question here in stack overflow on the topic. Any ideas, suggestions? Thanks in advance for your help.

  • Telerik ASP.NET AJAX - Ajax Update Label with dynamic created Docks

    - by csharpnoob
    Hi, I'm trying to update a simple Label on the Close event of a dynamically created RadDock. It works fine so far; the Label gets the correct values but doesn't update. RadDock dock = new RadDock(); dock.DockMode = DockMode.Docked; dock.UniqueName = Guid.NewGuid().ToString(); dock.ID = string.Format("RadDock{0}", dock.UniqueName); dock.Title = slide.slideName; dock.Text = string.Format("Added at {0}", DateTime.Now); dock.Width = Unit.Pixel(300); dock.AutoPostBack = true; dock.CommandsAutoPostBack = true; dock.Command += new DockCommandEventHandler(dock_Command); ... void dock_Command(object sender, DockCommandEventArgs e) { Status.Text = "Removed " + ((RadDock)sender).Title + " " + ((RadDock)sender).Text; } I tried to do this: RadAjaxManager1.AjaxSettings.AddAjaxSetting(dock, Status, null); while creating the docks, but at runtime I get a NullReferenceException. On a Button registered with the RadAjaxManager it works and shows the value assigned by dock_Command. protected void Button1_Click(object sender, EventArgs e) { Status.Text = Status.Text; } UPDATE: The RadAjaxManager was created with the integrated wizard of VS2008. I can't select the docks, because they are generated at runtime. In the backend it's included in autocompletion, so the NullReference has nothing to do with the AjaxManager itself. Like I said, it works fine with the Button. <telerik:RadAjaxManager ID="RadAjaxManager1"> <telerik:AjaxSetting AjaxControlID="Button1"> <UpdatedControls> <telerik:AjaxUpdatedControl ControlID="Label1"></telerik:AjaxUpdatedControl> </UpdatedControls> </telerik:AjaxSetting>
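
    One hedged guess at the NullReferenceException: AjaxSettings.AddAjaxSetting generally needs both controls to already be in the page's control tree, so registering the pairing only after the dock has been added to its zone may behave differently. A sketch, reusing the calls from the question, with the zone ID assumed:

        // Sketch: create the dock, add it to the control tree, *then* register
        // the AJAX pairing. RadDockZone1 and Status are assumed control IDs.
        RadDock dock = new RadDock();
        dock.UniqueName = Guid.NewGuid().ToString();
        dock.ID = string.Format("RadDock{0}", dock.UniqueName);
        dock.AutoPostBack = true;
        dock.CommandsAutoPostBack = true;
        dock.Command += new DockCommandEventHandler(dock_Command);

        RadDockZone1.Controls.Add(dock);                                  // into the tree first
        RadAjaxManager1.AjaxSettings.AddAjaxSetting(dock, Status, null);  // then wire the AJAX update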

  • How do I solve column width problems in a SSRS Tablix?

    - by David Stein
    I'm creating a simple report from Microsoft Dynamics CRM. When I pull in the following dataset: SELECT FQD.productidname , FQD.NEW_PRICEBREAKS , FQD.NEW_WEEKSARO , ltrim(rtrim(FP.NEW_PRODUCTNAME)) AS NewProductDesc , FQD.productdescription , FQD.quoteid , FQD.quantity , FQD.productiddsc , FQD.baseamount , FQD.lineitemnumber , FQD.priceperunit , FQD.extendedamount , ISNULL(FP.productnumber, '') AS productnumber , ISNULL(FQD.uomidname, '-') AS Unit , FQD.tax AS Tax , FQD.volumediscountamount * FQD.quantity AS Discount , FQD.manualdiscountamount AS MDiscount , FQD.quotedetailid , FQD.crm_moneyformatstring , FQD.NEW_PRICEPERUNIT , FQD.NEW_PRICEPERUNIT_BASE FROM FilteredQuoteDetail FQD LEFT OUTER JOIN FilteredProduct FP ON FQD.productid = FP.productid WHERE (FQD.quoteid = @CRM_QuoteId) The NewProductDesc field is too wide. If I shorted it in design view, it still comes out too wide in the presentation. I think the field is coming out that wide because the database field probably has a bunch of blank spaces at the end of every description. I could not find a way to force that field in the Tablix not to grow horizontally, so I attempted to remedy it in the dataset by replacing the NewProductDesc line with: ltrim(rtrim(FP.NEW_PRODUCTNAME)) AS NewProductDesc However, that has no effect either. Can anyone suggest why this behavior is occuring? Can anyone tell me how I can force the field not to grow horizontally?

  • How to resolve java.nio.charset.UnmappableCharacterException in Scala 2.8.0?

    - by Roman Kagan
    I'm using Scala 2.8.0 and trying to read pipe delimited file like in code snipped below: object Main { def main(args: Array[String]) :Unit = { if (args.length > 0) { val lines = scala.io.Source.fromPath("QUICK!LRU-2009-11-15.psv") for (line <-lines) print(line) } } } Here's the error: Exception in thread "main" java.nio.charset.UnmappableCharacterException: Input length = 1 at java.nio.charset.CoderResult.throwException(CoderResult.java:261) at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:319) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158) at java.io.InputStreamReader.read(InputStreamReader.java:167) at java.io.BufferedReader.fill(BufferedReader.java:136) at java.io.BufferedReader.read(BufferedReader.java:157) at scala.io.BufferedSource$$anonfun$1$$anonfun$apply$1.apply(BufferedSource.scala:29) at scala.io.BufferedSource$$anonfun$1$$anonfun$apply$1.apply(BufferedSource.scala:29) at scala.io.Codec.wrap(Codec.scala:65) at scala.io.BufferedSource$$anonfun$1.apply(BufferedSource.scala:29) at scala.io.BufferedSource$$anonfun$1.apply(BufferedSource.scala:29) at scala.collection.Iterator$$anon$14.next(Iterator.scala:149) at scala.collection.Iterator$$anon$2.next(Iterator.scala:745) at scala.collection.Iterator$$anon$2.head(Iterator.scala:732) at scala.collection.Iterator$$anon$24.hasNext(Iterator.scala:405) at scala.collection.Iterator$$anon$20.hasNext(Iterator.scala:320) at scala.io.Source.hasNext(Source.scala:209) at scala.collection.Iterator$class.foreach(Iterator.scala:534) at scala.io.Source.foreach(Source.scala:143) ... at infillreports.Main$.main(Main.scala:8) at infillreports.Main.main(Main.scala) Java Result: 1

  • "The specified view is invalid" in call to LimitedWebPartManager.AddWebPart in SharePoint 2010

    - by Lee Richardson
    This code used to work in WSS 3.0 / MOSS 2007 in FeatureReceiver.FeatureActivated: using (SPLimitedWebPartManager limitedWebPartManager = Site.GetLimitedWebPartManager("default.aspx", PersonalizationScope.Shared)) { ListViewWebPart listViewWebPart = new ListViewWebPart { Title = title, ListName = list.ID.ToString("B").ToUpper(), ViewGuid = view.ID.ToString("B").ToUpper() }; limitedWebPartManager.AddWebPart(listViewWebPart, zone, position); } I'm trying to convert to SharePoint 2010 and it now fails with: System.ArgumentException: The specified view is invalid. at Microsoft.SharePoint.SPViewCollection.get_Item(Guid guid) at Microsoft.SharePoint.WebPartPages.ListViewWebPart.EnsureListAndView(Boolean requireFullBlownViewSchema) at Microsoft.SharePoint.WebPartPages.ListViewWebPart.get_AppropriateBaseViewId() at Microsoft.SharePoint.WebPartPages.SPWebPartManager.AddWebPartInternal(SPSupersetWebPart superset, Boolean throwIfLocked) at Microsoft.SharePoint.WebPartPages.SPLimitedWebPartManager.AddWebPartInternal(WebPart webPart, String zoneId, Int32 zoneIndex, Boolean throwIfLocked) at Microsoft.SharePoint.WebPartPages.SPLimitedWebPartManager.AddWebPart(WebPart webPart, String zoneId, Int32 zoneIndex) Interestingly enough when I run it from a unit test it works, it only fails in FeatureActivated. When I debug with Reflector it is failing on this line: this.view = this.list.LightweightViews[new Guid(this.ViewGuid)]; list.LightweightViews only returns one view, the default view, even though list.Views returns all of them. I have no idea what LightweightViews is supposed to mean and I'm running out of ideas. Anyone else got any?

  • Getting MSDeploy working on our build/integration server - Is an MSBuild upgrade necessary?

    - by Jeff D
    We have what I think is a fairly standard build process: 1. Developer: Check in code 2. Build: Polls repo, sees change, and kicks off build that: 3. Build: Updates from repo, Builds w/ MSBuild, Runs unit tests w/ nunit, 4. Build: creates installer package Our security team allows us to pull from the build server, but does not allow the build server to push. So we generally rdp in, d/l the installers, and run them, which rules out the slick deployment services, so I would need to generate packages instead. I'd like to use MSDeploy, except that we have the following issues: We're on .net 3.5, and the MSBuild target (Package) that uses MSDeploy requires 4.0. Is there anything I'd need to install other than .net 4.0 RC for this? (Would MSBuild be part of that upgrade?) When I generate packages with MSDeploy, I see that I don't have just 1 file. There's a zip, deploy.cmd, SourceManifest.xml, and SetParameters.xml. What are all the other files for, and why wouldn't they all be in the 'package'? It sounds as if you can create packages by telling the system to look at a working IIS site. But if the packages are build from a CI environment, aren't you basically out of luck here? It feels like they designed some of this for small-scale developers deploying from their dev environment. That's a fine use case, but I'm interested in see what everyone's enterprise-experience is with the tool Any suggestions?

  • ASP.NET MVC2 Access-Control: How to do authorization dynamically?

    - by Shaharyar
    We're currently rewriting our organization's ASP.NET MVC application, which has been written twice already (once MVC1, once MVC2). (Thank god it wasn't production ready and too mature back then.) This time, anyhow, it's going to be the real deal because we'll be implementing more and more features as time passes, and the test runs with MVC1 and MVC2 showed that we're ready to upscale. Until now we were using Controller and Action authorization with AuthorizeAttribute's. But that won't do it any longer because our views are supposed to show different results based on the logged in user. Use Case: Let's say you're the mayor of a city and you log in to federally managed software, and you can only access and edit the citizens in your city. You are allowed to access those citizens via an entry in a specialized MajorHasRightsForCity table containing a MajorId and a CityId. What I thought of is something like this: public ViewResult Edit(int cityId) { if (Access.UserCanEditCity(currentUser, cityId)) { var currentCity = Db.Cities.Single(c => c.id == cityId); return View(currentCity); } else { TempData["ErrorMessage"] = "Yo are not awesome enough to edit that shizzle!"; return View(); } } The static class Access would do all kinds of checks and return either true or false from its methods. This implies that I would need to change and edit all of my controllers every time I change something. (Which would be a pain, because all unit tests would need to be adjusted every time something changes.) Is doing something like that even allowed?
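
    If the per-entity check has to run on many actions, one option is to move it into a filter so the controllers stay declarative. A sketch for MVC 2, reusing the Access helper from the question; the attribute name and the assumption that the action parameter is called cityId are mine:

        using System.Web.Mvc;

        // Sketch: pull the cityId action parameter and run the same Access check
        // before the action executes, returning 401 when it fails.
        public class CityAccessAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                object cityIdValue;
                if (filterContext.ActionParameters.TryGetValue("cityId", out cityIdValue))
                {
                    var user = filterContext.HttpContext.User;
                    if (!Access.UserCanEditCity(user, (int)cityIdValue))
                    {
                        filterContext.Result = new HttpUnauthorizedResult();
                    }
                }
                base.OnActionExecuting(filterContext);
            }
        }

        // Usage (hypothetical):
        // [CityAccess]
        // public ViewResult Edit(int cityId) { ... }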

  • Compare base class part of sub class instance to another base class instance

    - by Anders Abel
    I have number of DTO classes in a system. They are organized in an inheritance hierarchy. class Person { public int Id { get; set; } public string FirstName { get; set; } public string ListName { get; set; } } class PersonDetailed : Person { public string WorkPhone { get; set; } public string HomePhone { get; set; } public byte[] Image { get; set; } } The reason for splitting it up is to be able to get a list of people for e.g. search results, without having to drag the heavy image and phone numbers along. Then the full PersonDetail DTO is loaded when the details for one person is selected. The problem I have run into is comparing these when writing unit tests. Assume I have Person p1 = myService.GetAPerson(); PersonDetailed p2 = myService.GetAPersonDetailed(); // How do I compare the base class part of p2 to p1? Assert.AreEqual(p1, p2); The Assert above will fail, as p1 and p2 are different classes. Is it possible to somehow only compare the base class part of p2 to p1? Should I implement IEquatable<> on Person? Other suggestions?
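
    One option, sketched below, is to implement the equality members on the base DTO so that only the Person-level properties participate; Assert.AreEqual(p1, p2) then dispatches to Person.Equals and ignores the subclass entirely, which may or may not be what the rest of the system expects from equality:

        using System;

        // Sketch: equality defined purely in terms of the base properties, so a
        // PersonDetailed compares equal to a Person with the same base values.
        class Person : IEquatable<Person>
        {
            public int Id { get; set; }
            public string FirstName { get; set; }
            public string ListName { get; set; }

            public bool Equals(Person other)
            {
                if (other == null) return false;
                return Id == other.Id
                    && FirstName == other.FirstName
                    && ListName == other.ListName;
            }

            public override bool Equals(object obj)
            {
                return Equals(obj as Person);
            }

            public override int GetHashCode()
            {
                return Id.GetHashCode();
            }
        }

        // In the test, Assert.AreEqual(p1, p2) now compares only the base part;
        // an explicit Assert.IsTrue(p1.Equals(p2)) makes that intent more obvious.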
