Search Results

Search found 3994 results on 160 pages for 'implementing'.


  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front will be a VisualBasic-like scripting language. I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single run of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time. The testing engine itself will be separated from the interface and testing will even take place in a separate process with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code was compiled to LLVM byte-code format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time. Ideally, I'd like to JIT compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT? Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses for a specific C++ class. So the C++ testing engine would only see C++ objects while the JIT setup code compiled scripting code that implemented some of the methods for the objects. It seems that if I used the right name mangling algorithm it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call which could then be linked into the testing engine. Thus the linking stage would go in two directions, calls from the scripting language into the testing engine objects to retrieve pricing information and test state information and calls from the testing engine of methods of some particular C++ objects where the code was supplied not from C++ but from the scripting language. In summary: 1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation, code-generation process? 2) Can I link in code using that the process in 1) above in such a way that I am able to create code that acts as if it was all written in C++?

    Read the article

  • Why doesn't C# implement indexed properties?

    - by Thomas Levesque
    I know, I know... Eric Lippert's answer to this kind of question is usually something like "because it wasn't worth the cost of designing, implementing, testing and documenting it". But still, I'd like a better explanation... I was reading this blog post about new C# 4 features, and in the section about COM Interop, the following part caught my attention : By the way, this code uses one more new feature: indexed properties (take a closer look at those square brackets after Range.) But this feature is available only for COM interop; you cannot create your own indexed properties in C# 4.0. OK, but why ? I already knew and regretted that it wasn't possible to create indexed properties in C#, but this sentence made me think again about it. I can see several good reasons to implement it : the CLR supports it (for instance, PropertyInfo.GetValue has an index parameter), so it's a pity we can't take advantage of it in C# it is supported for COM interop, as shown in the article (using dynamic dispatch) it is implemented in VB.NET it is already possible to create indexers, i.e. to apply an index to the object itself, so it would probably be no big deal to extend the idea to properties, keeping the same syntax and just replacing this with a property name It would allow to write that kind of things : public class Foo { private string[] _values = new string[3]; public string Values[int index] { get { return _values[index]; } set { _values[index] = value; } } } Currently the only workaround that I know is to create an inner class (ValuesCollection for instance) that implements an indexer, and change the Values property so that it returns an instance of that inner class. This is very easy to do, but annoying... So perhaps the compiler could do it for us ! An option would be to generate an inner class that implements the indexer, and expose it through a public generic interface : // interface defined in the namespace System public interface IIndexer<TIndex, TValue> { TValue this[TIndex index] { get; set; } } public class Foo { private string[] _values = new string[3]; private class <>c__DisplayClass1 : IIndexer<int, string> { private Foo _foo; public <>c__DisplayClass1(Foo foo) { _foo = foo; } public string this[int index] { get { return _foo._values[index]; } set { _foo._values[index] = value; } } } private IIndexer<int, string> <>f__valuesIndexer; public IIndexer<int, string> Values { get { if (<>f__valuesIndexer == null) <>f__valuesIndexer = new <>c__DisplayClass1(this); return <>f__valuesIndexer; } } } But of course, in that case the property would actually return a IIndexer<int, string>, and wouldn't really be an indexed property... It would be better to generate a real CLR indexed property. What do you think ? Would you like to see this feature in C# ? If not, why ?
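
    For reference, a minimal hand-written sketch of the workaround mentioned above (an inner ValuesCollection class exposing the indexer) could look like this; the member names are illustrative and the class is deliberately stripped down:

        public class Foo
        {
            private readonly string[] _values = new string[3];
            private ValuesCollection _valuesCollection;

            // Helper class whose only purpose is to expose an indexer over _values.
            public sealed class ValuesCollection
            {
                private readonly Foo _owner;
                internal ValuesCollection(Foo owner) { _owner = owner; }

                public string this[int index]
                {
                    get { return _owner._values[index]; }
                    set { _owner._values[index] = value; }
                }
            }

            // Callers write foo.Values[1] = "x", which reads almost like an indexed property.
            public ValuesCollection Values
            {
                get { return _valuesCollection ?? (_valuesCollection = new ValuesCollection(this)); }
            }
        }

    The compiler-generated variant proposed above would essentially automate this boilerplate.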

    Read the article

  • Correct XML serialization and deserialization of "mixed" types in .NET

    - by Stefan
    My current task involves writing a class library for processing HL7 CDA files. These HL7 CDA files are XML files with a defined XML schema, so I used xsd.exe to generate .NET classes for XML serialization and deserialization. The XML Schema contains various types which contain the mixed="true" attribute, specifying that an XML node of this type may contain normal text mixed with other XML nodes. The relevant part of the XML schema for one of these types looks like this: <xs:complexType name="StrucDoc.Paragraph" mixed="true"> <xs:sequence> <xs:element name="caption" type="StrucDoc.Caption" minOccurs="0"/> <xs:choice minOccurs="0" maxOccurs="unbounded"> <xs:element name="br" type="StrucDoc.Br"/> <xs:element name="sub" type="StrucDoc.Sub"/> <xs:element name="sup" type="StrucDoc.Sup"/> <!-- ...other possible nodes... --> </xs:choice> </xs:sequence> <xs:attribute name="ID" type="xs:ID"/> <!-- ...other attributes... --> </xs:complexType> The generated code for this type looks like this: /// <remarks/> [System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "2.0.50727.3038")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(TypeName="StrucDoc.Paragraph", Namespace="urn:hl7-org:v3")] public partial class StrucDocParagraph { private StrucDocCaption captionField; private object[] itemsField; private string[] textField; private string idField; // ...fields for other attributes... /// <remarks/> public StrucDocCaption caption { get { return this.captionField; } set { this.captionField = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute("br", typeof(StrucDocBr))] [System.Xml.Serialization.XmlElementAttribute("sub", typeof(StrucDocSub))] [System.Xml.Serialization.XmlElementAttribute("sup", typeof(StrucDocSup))] // ...other possible nodes... public object[] Items { get { return this.itemsField; } set { this.itemsField = value; } } /// <remarks/> [System.Xml.Serialization.XmlTextAttribute()] public string[] Text { get { return this.textField; } set { this.textField = value; } } /// <remarks/> [System.Xml.Serialization.XmlAttributeAttribute(DataType="ID")] public string ID { get { return this.idField; } set { this.idField = value; } } // ...properties for other attributes... } If I deserialize an XML element where the paragraph node looks like this: <paragraph>first line<br /><br />third line</paragraph> The result is that the item and text arrays are read like this: itemsField = new object[] { new StrucDocBr(), new StrucDocBr(), }; textField = new string[] { "first line", "third line", }; From this there is no possible way to determine the exact order of the text and the other nodes. If I serialize this again, the result looks exactly like this: <paragraph> <br /> <br />first linethird line </paragraph> The default serializer just serializes the items first and then the text. I tried implementing IXmlSerializable on the StrucDocParagraph class so that I could control the deserialization and serialization of the content, but it's rather complex since there are so many classes involved and I didn't come to a solution yet because I don't know if the effort pays off. Is there some kind of easy workaround to this problem, or is it even possible by doing custom serialization via IXmlSerializable? Or should I just use XmlDocument or XmlReader/XmlWriter to process these documents?
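
    One approach that may avoid implementing IXmlSerializable altogether is to let the text nodes flow into the same Items array as the child elements, so XmlSerializer keeps document order. Below is a sketch of how the generated Items property could be adjusted (the separate Text property would be dropped); it assumes the rest of the generated class is left unchanged:

        // Sketch: with [XmlText(typeof(string))] on the same array as the [XmlElement]
        // mappings, plain text is deserialized as string entries interleaved with the
        // element objects, preserving the original order on round-trips.
        [System.Xml.Serialization.XmlElementAttribute("br", typeof(StrucDocBr))]
        [System.Xml.Serialization.XmlElementAttribute("sub", typeof(StrucDocSub))]
        [System.Xml.Serialization.XmlElementAttribute("sup", typeof(StrucDocSup))]
        [System.Xml.Serialization.XmlTextAttribute(typeof(string))]
        public object[] Items
        {
            get { return this.itemsField; }
            set { this.itemsField = value; }
        }

    Deserializing <paragraph>first line<br /><br />third line</paragraph> should then yield Items containing "first line", the two StrucDocBr instances and "third line" in document order, and serializing writes them back in the same order.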

    Read the article

  • Servlet/JSP Flow Control: Enums, Exceptions, or Something Else?

    - by Christopher Parker
    I recently inherited an application developed with bare servlets and JSPs (i.e.: no frameworks). I've been tasked with cleaning up the error-handling workflow. Currently, each <form> in the workflow submits to a servlet, and based on the result of the form submission, the servlet does one of two things: If everything is OK, the servlet either forwards or redirects to the next page in the workflow. If there's a problem, such as an invalid username or password, the servlet forwards to a page specific to the problem condition. For example, there are pages such as AccountDisabled.jsp, AccountExpired.jsp, AuthenticationFailed.jsp, SecurityQuestionIncorrect.jsp, etc. I need to redesign this system to centralize how problem conditions are handled. So far, I've considered two possible solutions: Exceptions Create an exception class specific to my needs, such as AuthException. Inherit from this class to be more specific when necessary (e.g.: InvalidUsernameException, InvalidPasswordException, AccountDisabledException, etc.). Whenever there's a problem condition, throw an exception specific to the condition. Catch all exceptions via web.xml and route them to the appropriate page(s) with the <error-page> tag. enums Adopt an error code approach, with an enum keeping track of the error code and description. The descriptions can be read from a resource bundle in the finished product. I'm leaning more toward the enum approach, as an authentication failure isn't really an "exceptional condition" and I don't see any benefit in adding clutter to the server logs. Plus, I'd just be replacing one maintenance headache with another. Instead of separate JSPs to maintain, I'd have separate Exception classes. I'm planning on implementing "error" handling in a servlet that I'm writing specifically for this purpose. I'm also going to eliminate all of the separate error pages, instead setting an error request attribute with the error message to display to the user and forwarding back to the referrer. Each target servlet (Logon, ChangePassword, AnswerProfileQuestions, etc.) would add an error code to the request and redirect to my new servlet in the event of a problem. My new servlet would look something like this: public enum Error { INVALID_PASSWORD(5000, "You have entered an invalid password."), ACCOUNT_DISABLED(5002, "Your account has been disabled."), SESSION_EXPIRED(5003, "Your session has expired. 
Please log in again."), INVALID_SECURITY_QUESTION(5004, "You have answered a security question incorrectly."); private final int code; private final String description; Error(int code, String description) { this.code = code; this.description = description; } public int getCode() { return code; } public String getDescription() { return description; } }; protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String sendTo = "UnknownError.jsp"; String message = "An unknown error has occurred."; int errorCode = Integer.parseInt((String)request.getAttribute("errorCode"), 10); Error errors[] = Error.values(); Error error = null; for (int i = 0; error == null && i < errors.length; i++) { if (errors[i].getCode() == errorCode) { error = errors[i]; } } if (error != null) { sendTo = request.getHeader("referer"); message = error.getDescription(); } request.setAttribute("error", message); request.getRequestDispatcher(sendTo).forward(request, response); } protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { doGet(request, response); } Being fairly inexperienced with Java EE (this is my first real exposure to JSPs and servlets), I'm sure there's something I'm missing, or my approach is suboptimal. Am I on the right track, or do I need to rethink my strategy?

    Read the article

  • Unresolved compilation problems -- can't use .jar files that I have created

    - by Mike
    I created a few .jar files and am trying to access them in another application - I have tried to use both Eclipse and IntelliJ and experience the same issue: java.lang.Error: Unresolved compilation problems: The import com.XXXX.XXXXXXXXX.project2 cannot be resolved The import com.XXXX.XXXXXXXXX.project2 cannot be resolved BeanFactory cannot be resolved to a type Author cannot be resolved to a type AuthorFactoryImpl cannot be resolved to a type Author cannot be resolved to a type Author cannot be resolved to a type I have been using Maven during this process and the jars compile fine. I have included them on the file path using both the Maven .pom file and directly assigning them. I also have unassigned the direct file path and left the reference in Maven and vise versa -- no difference. See below .jar file class info: file structure: Author.java BeanWithIdentityInterface Books Subject ie: Interface: package com.XXXX.training; /** * Created with IntelliJ IDEA. * User: kBPersonal * Date: 11/5/12 * Time: 3:16 PM * */ public interface BeanWithIdentityInterface <I> { I getId(); } Author.java: package com.XXXX.training; /** * Created with IntelliJ IDEA. * User: kBPersonal * Date: 10/25/12 * Time: 12:03 PM */ public class Author implements BeanWithIdentityInterface <Integer>{ private Integer id = null; private String name = null; private String picture = null; private String bio = null; public Author(Integer id, String bio, String name, String picture) { this.id = id; this.bio = bio; this.name = name; this.picture = picture; } public Author (){} @Override public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getPicture() { return picture; } public void setPicture(String picture) { this.picture = picture; } public String getBio() { return bio; } public void setBio(String bio) { this.bio = bio; } @Override public String toString() { return "\n\tAuthor Id: "+this.getId() + " | Bio:"+ this.getBio()+ " | Name:"+ this.getName()+ " | Picture: "+ this.getPicture(); } } implementing Servlet: package com.acentia.training.project3.controller; import com.acentia.training.*; import com.acentia.training.project2.AuthorFactoryImpl; import com.acentia.training.project2.BeanFactory; import javax.servlet.ServletException; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.IOException; import java.io.PrintWriter; /** * Created with IntelliJ IDEA. * User: kBPersonal * Date: 11/11/12 * Time: 6:34 PM * */ public class ListAuthorServlet extends AbstractBaseServlet { private static final long serialVersionUID = -6934109551750492182L; public void doProcess(final HttpServletRequest request, final HttpServletResponse response) throws IOException { final BeanFactory<Author, Integer> authorFactory = new AuthorFactoryImpl(); Author author = null; if (authorFactory != null) { author = (Author) authorFactory.getMember(5); } I can't pull the Author class. Any help would be greatly appreciated.

    Read the article

  • Preventing repeated selection of synchronized controls?

    - by BillW
    The working code sample here synchronizes (single) selection in a TreeView, ListView, and ComboBox via the use of lambda expressions in a dictionary where the Key in the dictionary is a Control, and the Value of each Key is an Action<int. Where I am stuck is that I am getting multiple repetitions of execution of the code that sets the selection in the various controls in a way that's unexpected : it's not recursing : there's no StackOverFlow error happening; but, I would like to figure out why the current strategy for preventing multiple selection of the same controls is not working. Perhaps the real problem here is distinguishing between a selection update triggered by the end-user and a selection update triggered by the code that synchronizes the other controls ? Note: I've been experimenting with using Delegates, and forms of Delegates like Action<T>, to insert executable code in Dictionaries : I "learn best" by posing programming "challenges" to myself, and implementing them, as well as studying, at the same time, the "golden words" of luminaries like Skeet, McDonald, Liberty, Troelsen, Sells, Richter. Note: Appended to this question/code, for "deep background," is a statement of how I used to do things in pre C#3.0 days where it seemed like I did need to use explicit measures to prevent recursion when synchronizing selection. Code : Assume a WinForms standard TreeView, ListView, ComboBox, all with the same identical set of entries (i.e., the TreeView has only root nodes; the ListView, in Details View, has one Column). private Dictionary<Control, Action<int>> ControlToAction = new Dictionary<Control, Action<int>>(); private void Form1_Load(object sender, EventArgs e) { // add the Controls to be synchronized to the Dictionary // with appropriate Action<int> lambda expressions ControlToAction.Add(treeView1, (i => { treeView1.SelectedNode = treeView1.Nodes[i]; })); ControlToAction.Add(listView1, (i => { listView1.Items[i].Selected = true; })); ControlToAction.Add(comboBox1, (i => { comboBox1.SelectedIndex = i; })); } private void synchronizeSelection(int i, Control currentControl) { foreach(Control theControl in ControlToAction.Keys) { // skip the 'current control' if (theControl == currentControl) continue; // for debugging only Console.WriteLine(theControl.Name + " synchronized"); // execute the Action<int> associated with the Control ControlToAction[theControl](i); } } private void treeView1_AfterSelect(object sender, TreeViewEventArgs e) { synchronizeSelection(e.Node.Index, treeView1); } private void listView1_SelectedIndexChanged(object sender, EventArgs e) { // weed out ListView SelectedIndexChanged firing // with SelectedIndices having a Count of #0 if (listView1.SelectedIndices.Count > 0) { synchronizeSelection(listView1.SelectedIndices[0], listView1); } } private void comboBox1_SelectedValueChanged(object sender, EventArgs e) { if (comboBox1.SelectedIndex > -1) { synchronizeSelection(comboBox1.SelectedIndex, comboBox1); } } background : pre C# 3.0 Seems like, back in pre C# 3.0 days, I was always using a boolean flag to prevent recursion when multiple controls were updated. 
For example, I'd typically have code like this for synchronizing a TreeView and ListView : assuming each Item in the ListView was synchronized with a root-level node of the TreeView via a common index : // assume ListView is in 'Details View,' has a single column, // MultiSelect = false // FullRowSelect = true // HideSelection = false; // assume TreeView // HideSelection = false // FullRowSelect = true // form scoped variable private bool dontRecurse = false; private void treeView1_AfterSelect(object sender, TreeViewEventArgs e) { if(dontRecurse) return; dontRecurse = true; listView1.Items[e.Node.Index].Selected = true; dontRecurse = false; } private void listView1_SelectedIndexChanged(object sender, EventArgs e) { if(dontRecurse) return // weed out ListView SelectedIndexChanged firing // with SelectedIndices having a Count of #0 if (listView1.SelectedIndices.Count > 0) { dontRecurse = true; treeView1.SelectedNode = treeView1.Nodes[listView1.SelectedIndices[0]]; dontRecurse = false; } } Then it seems, somewhere around FrameWork 3~3.5, I could get rid of the code to suppress recursion, and there was was no recursion (at least not when synchronizing a TreeView and a ListView). By that time it had become a "habit" to use a boolean flag to prevent recursion, and that may have had to do with using a certain third party control.
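
    One way to damp the redundant updates, sketched against the code above, is to make each Action<int> idempotent (skip the assignment when the control already shows index i), optionally combined with the old-style re-entrancy guard around synchronizeSelection; both pieces below are illustrative, not tested:

        // (inside Form1_Load) Idempotent actions: if the control is already at index i,
        // do nothing, so the change event it would raise cannot cascade back through
        // the dictionary. The same pattern applies to listView1.
        ControlToAction.Add(treeView1, i =>
        {
            if (treeView1.SelectedNode != treeView1.Nodes[i])
                treeView1.SelectedNode = treeView1.Nodes[i];
        });
        ControlToAction.Add(comboBox1, i =>
        {
            if (comboBox1.SelectedIndex != i)
                comboBox1.SelectedIndex = i;
        });

        // Alternatively (or additionally), a re-entrancy guard distinguishes user-driven
        // selection changes from the ones this method performs itself.
        private bool isSynchronizing;

        private void synchronizeSelection(int i, Control currentControl)
        {
            if (isSynchronizing) return;
            isSynchronizing = true;
            try
            {
                foreach (Control control in ControlToAction.Keys)
                {
                    if (control == currentControl) continue;
                    ControlToAction[control](i);
                }
            }
            finally
            {
                isSynchronizing = false;
            }
        }

    Either measure separates a selection change made by the end user from one triggered by the synchronization itself, which is the distinction the question asks about.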

    Read the article

  • Improving the performance of XSL

    - by Rachel
    In the below XSL for the variable "insert-data", I have an input param with the structure, <insert-data> <data compareIndex="4" nodeName="d1e1"> <a/> </data> <data compareIndex="5" nodeName="d1e1"> <b/> </data> <data compareIndex="7" nodeName="d1e2"> <a/> </data> <data compareIndex="9" nodeName="d1e2"> <b/> </data> </insert-data> where "nodeName" is the id of a node and "compareIndex" is the position of the text content relative to the node having id "$nodeName". I am using the below XSL to select all the text nodes(generate-id) that satisfy the above condition and construct a data xml. The below implementation works perfectly but the time taken for the execution is in min. Is there a better way of implementing or is there any in-efficient operation being used. From my observation the code where the preceding text length is calculated consumes the major time. Please share your thoughts to improve the performance of the XSL. I am using Java SAXON XSL transformer. <xsl:variable name="insert-data" as="element()*"> <xsl:for-each select="$insert-file/insert-data/data"> <xsl:sort select="xsd:integer(@index)"/> <xsl:variable name="compareIndex" select="xsd:integer(@compareIndex)" /> <xsl:variable name="nodeName" select="@nodeName" /> <xsl:variable name="nodeContent" as="node()"> <xsl:copy-of select="node()"/> </xsl:variable> <xsl:for-each select="$main-root/*//text()[ancestor::*[@id = $nodeName]]"> <xsl:variable name="preTextLength" as="xsd:integer" select="sum((preceding::text())[. ancestor::*[@id = $nodeName]]/string-length(.))" /> <xsl:variable name="currentTextLength" as="xsd:integer" select="string-length(.)" /> <xsl:variable name="sum" select="$preTextLength + $currentTextLength" as="xsd:integer"></xsl:variable> <xsl:variable name="split-index" select="$compareIndex - $preTextLength" as="xsd:integer"></xsl:variable> <xsl:if test="($sum ge $compareIndex) and ($compareIndex gt $preTextLength)"> <data split-index="{$split-index}" text-id="{generate-id(.)}"> <xsl:copy-of select="$nodeContent"/> </data> </xsl:if> </xsl:for-each> </xsl:for-each> </xsl:variable>

    Read the article

  • Problem with socket communication between C# and Flex

    - by Chris Lee
    Hi all, I am implementing a simulated b/s stock data system. I am using flex and c# for client and server sides. I found flash has a security policy and I handled the policy-file-request in my server code. But seems it doesn't work, because the code jumped out at "socket.Receive(b)" after connection. I've tried sending message on client in the connection handler, in that case the server can receive correct message. But the auto-generated "policy-file-request" can never be received, and the client can get no data sending from server. Here I put my code snippet. my ActionScript code: public class StockClient extends Sprite { private var hostName:String = "192.168.84.103"; private var port:uint = 55555; private var socket:XMLSocket; public function StockClient() { socket = new XMLSocket(); configureListeners(socket); socket.connect(hostName, port); } public function send(data:Object) : void{ socket.send(data); } private function configureListeners(dispatcher:IEventDispatcher):void { dispatcher.addEventListener(Event.CLOSE, closeHandler); dispatcher.addEventListener(Event.CONNECT, connectHandler); dispatcher.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler); dispatcher.addEventListener(ProgressEvent.PROGRESS, progressHandler); dispatcher.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler); dispatcher.addEventListener(ProgressEvent.SOCKET_DATA, dataHandler); } private function closeHandler(event:Event):void { trace("closeHandler: " + event); } private function connectHandler(event:Event):void { trace("connectHandler: " + event); //following testing message can be received, but client can't invoke data handler //send("<policy-file-request/>"); } private function dataHandler(event:ProgressEvent):void { //never fired trace("dataHandler: " + event); } private function ioErrorHandler(event:IOErrorEvent):void { trace("ioErrorHandler: " + event); } private function progressHandler(event:ProgressEvent):void { trace("progressHandler loaded:" + event.bytesLoaded + " total: " + event.bytesTotal); } private function securityErrorHandler(event:SecurityErrorEvent):void { trace("securityErrorHandler: " + event); } } my C# code: const int PORT_NUMBER = 55555; const String BEGIN_REQUEST = "begin"; const String END_REQUEST = "end"; const String POLICY_REQUEST = "<policy-file-request/>\u0000"; const String POLICY_FILE = "<?xml version=\"1.0\"?>\n" + "<!DOCTYPE cross-domain-policy SYSTEM \"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd\">\n" + "<cross-domain-policy> \n" + " <allow-access-from domain=\"*\" to-ports=\"55555\"/> \n" + "</cross-domain-policy>\u0000"; ................ 
private void startListening() { provider = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); provider.Bind(new IPEndPoint(IPAddress.Parse("192.168.84.103"), PORT_NUMBER)); provider.Listen(10); isListened = true; while (isListened) { Socket socket = provider.Accept(); Console.WriteLine("connect!"); byte[] b = new byte[1024]; int receiveLength = 0; try { // code jump out at this statement receiveLength = socket.Receive(b); } catch (Exception e) { Debug.WriteLine(e.ToString()); } String request = System.Text.Encoding.UTF8.GetString(b, 0, receiveLength); Console.WriteLine("request:"+request); if (request == POLICY_REQUEST) { socket.Send(Encoding.UTF8.GetBytes(POLICY_FILE)); Console.WriteLine("response:" + POLICY_FILE); } else if (request == END_REQUEST) { Dispose(socket); } else { StartSocket(socket); break; } } } Sorry for the long code, please someone help with it, thanks a million
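
    It is hard to be sure from the snippet alone, but one common pitfall with the XMLSocket policy handshake is that the request arrives with a trailing null byte so the string comparison silently fails, and that Flash uses a separate, short-lived connection just for the policy file. A hedged debugging sketch for the server side (the constant names follow the code above):

        // Sketch: trim the null terminator before comparing, answer with the
        // null-terminated policy file, and close this connection afterwards --
        // Flash reconnects separately for the actual data exchange.
        string request = Encoding.UTF8.GetString(b, 0, receiveLength).TrimEnd('\0');
        Console.WriteLine("request: " + request);

        if (request == "<policy-file-request/>")
        {
            socket.Send(Encoding.UTF8.GetBytes(POLICY_FILE)); // POLICY_FILE already ends with '\0'
            socket.Shutdown(SocketShutdown.Both);
            socket.Close();
            continue; // wait for the follow-up data connection
        }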

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods: public void AddItemSecurity(int itemId, int[] userIds) and public int[] GetValidItemIds(int userId). Initially I'm thinking of storage of the form itemId -> userId, userId, userId and userId -> itemId, itemId, itemId. AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item ids are of the form 2007123456, 2010001234 (10 digits where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidItemIds needs to be sub-second. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as memory-mapped files with the .NET 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The ItemId -> UserId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly.
    Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)
    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:
    - Table with two columns (userid, itemid), both int
    - Clustered index on the two columns
    - Added ~800,000 items for 180 users - total of 144 million rows
    - Allocated 4 GB RAM for SQL Server
    - Dual-core 2.66 GHz laptop
    - SSD disk
    - Use a SqlDataReader to read all itemids into a List
    - Loop over all users
    If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results degrade: adding a third thread brings a lot of the queries up to 2 seconds; a fourth thread, up to 4 seconds; a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even on one thread. My test app takes some of the time due to the tight loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well, at least not on my tested hardware. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But that makes it harder to remove items.
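
    A rough sketch of the bitmap layout described above (year prefix stripped from the item id, one bit per item, one array per user per year) might look like the following; persistence through memory-mapped files or a FileStream would sit on top of the raw byte arrays:

        using System.Collections.Generic;

        // Sketch: item ids such as 2007123456 are split into a year (2007) and a
        // 6-digit offset (123456); each (user, year) pair owns a 125,000-byte bitmap.
        public sealed class UserItemBitmap
        {
            private const int IdsPerYear = 1000000;
            private readonly Dictionary<int, byte[]> bitsByYear = new Dictionary<int, byte[]>();

            public void Set(long itemId, bool present)
            {
                int year = (int)(itemId / IdsPerYear);
                int index = (int)(itemId % IdsPerYear);
                byte[] bits;
                if (!bitsByYear.TryGetValue(year, out bits))
                {
                    bits = new byte[IdsPerYear / 8];
                    bitsByYear[year] = bits;
                }
                if (present)
                    bits[index / 8] |= (byte)(1 << (index % 8));
                else
                    bits[index / 8] &= (byte)~(1 << (index % 8));
            }

            public bool Contains(long itemId)
            {
                int year = (int)(itemId / IdsPerYear);
                int index = (int)(itemId % IdsPerYear);
                byte[] bits;
                return bitsByYear.TryGetValue(year, out bits)
                    && (bits[index / 8] & (1 << (index % 8))) != 0;
            }
        }

    GetValidItemIds would then scan the per-user bitmaps, and AddItemSecurity flips bits for the users in the incoming list (and clears them for users that were removed).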

    Read the article

  • Error when creating a Serializable mock with Mockito

    - by KwintenP
    I'm trying to create a mock object with Mockito that can be serialized. The object is an interface implementation. When this method is called, I receive an object that I want to pass to another object, hence using the doAnswer(...)-method. This is my code. InterfaceClass obj = mock(InterfaceClass.class, withSettings().serializable()); doAnswer(new Answer<Object>() { public Object answer(InvocationOnMock invocation) throws Throwable { Object[] args = invocation.getArguments(); //Here I do something with the arguments } }).when(obj).someMethod( any(someObject.class)); ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutput out = null; try { out = new ObjectOutputStream(bos); out.writeObject(obj); byte[] yourBytes = bos.toByteArray(); } finally { out.close(); bos.close(); } As far as I can tell this should be correct (I'm fairly new to Mockito). But when Serializing my object I get this error: java.io.NotSerializableException: com.trust1t.ocs.signcore.test.InvalidInputTestCase$1 at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1165) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:329) at java.util.concurrent.ConcurrentLinkedQueue.writeObject(ConcurrentLinkedQueue.java:644) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:950) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1482) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1413) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1159) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1535) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1413) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1159) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:329) at java.util.LinkedList.writeObject(LinkedList.java:943) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:950) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1482) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1413) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1159) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1535) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1413) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1159) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1535) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1413) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1159) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1535) at 
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1413) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1159) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1535) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1413) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1159) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:329) at com.trust1t.ocs.signcore.test.InvalidInputTestCase.certificateValidationTest(InvalidInputTestCase.java:117) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) The invalidInputTestCase class is the class containing the test where I'm using this code. It looks as if the mock object references this TestCase somewhere (can't find it though). Am I not correctly implementing this or better ideas to mock?

    Read the article

  • SDL side-scroller scrolls inconsistently

    - by SDLFunTimes
    So I'm working on an upgrade from my previous project (that I posted here for code review) this time implementing a repeating background (like what is used on cartoons) so that SDL doesn't have to load really big images for a level. There's a strange inconsistency in the program, however: the first time the user scrolls all the way to the right 2 less panels are shown than is specified. Going backwards (left) the correct number of panels is shown (that is the panels repeat the number of times specified in the code). After that it appears that going right again (once all the way at the left) the correct number of panels is shown and same going backwards. Here's some selected code and here's a .zip of all my code constructor: Game::Game(SDL_Event* event, SDL_Surface* scr, int level_w, int w, int h, int bpp) { this->event = event; this->bpp = bpp; level_width = level_w; screen = scr; w_width = w; w_height = h; //load images and set rects background = format_surface("background.jpg"); person = format_surface("person.png"); background_rect_left = background->clip_rect; background_rect_right = background->clip_rect; current_background_piece = 1; //we are displaying the first clip rect_in_view = &background_rect_right; other_rect = &background_rect_left; person_rect = person->clip_rect; background_rect_left.x = 0; background_rect_left.y = 0; background_rect_right.x = background->w; background_rect_right.y = 0; person_rect.y = background_rect_left.h - person_rect.h; person_rect.x = 0; } and here's the move method which is probably causing all the trouble: void Game::move(SDLKey direction) { if(direction == SDLK_RIGHT) { if(move_screen(direction)) { if(!background_reached_right()) { //move background right background_rect_left.x += movement_increment; background_rect_right.x += movement_increment; if(rect_in_view->x >= 0) { //move the other rect in to fill the empty space SDL_Rect* temp; other_rect->x = -w_width + rect_in_view->x; temp = rect_in_view; rect_in_view = other_rect; other_rect = temp; current_background_piece++; std::cout << current_background_piece << std::endl; } if(background_overshoots_right()) { //sees if this next blit is past the surface //this is used only for re-aligning the rects when //the end of the screen is reached background_rect_left.x = 0; background_rect_right.x = w_width; } } } else { //move the person instead person_rect.x += movement_increment; if(get_person_right_side() > w_width) { //person went too far right person_rect.x = w_width - person_rect.w; } } } else if(direction == SDLK_LEFT) { if(move_screen(direction)) { if(!background_reached_left()) { //moves background left background_rect_left.x -= movement_increment; background_rect_right.x -= movement_increment; if(rect_in_view->x <= -w_width) { //swap the rect in view SDL_Rect* temp; rect_in_view->x = w_width; temp = rect_in_view; rect_in_view = other_rect; other_rect = temp; current_background_piece--; std::cout << current_background_piece << std::endl; } if(background_overshoots_left()) { background_rect_left.x = 0; background_rect_right.x = w_width; } } } else { //move the person instead person_rect.x -= movement_increment; if(person_rect.x < 0) { //person went too far left person_rect.x = 0; } } } } without the rest of the code this doesn't make too much sense. Since there is too much of it I'll upload it here for testing. Anyway does anyone know how I could fix this inconsistency?

    Read the article

  • Highlighting the current navigation item in PHP

    - by Kira
    I've launched a website a while back and successfully used Javascript + CSS to highlight the current page on the navigation. However, it is not working in Safari and it does not validate well, when using Javascript, so I decided to have PHP assign the CSS id to the HTML elements. So far, it works fine, compared to the other times where there was two of each link displayed, when it was attempted in PHP. My problem is that all links look normal and the CSS property is not applied. I have a feeling that it has to do with my PHP code, but I'm not certain. The site address is here As for the PHP code, here it is: <?php echo('<li><span class="bold">Main</span>'); echo('<ul>'); if ($page=="home") { echo('<li><a id="current" href="index.shtml">Home</a></li>'); } else { echo('<li><a href="index.shtml">Home</a></li>'); } if ($page=="faq") { echo('<li><a id="current" href="faq.shtml">FAQ</a></li>'); } else { echo('<li><a href="faq.shtml">FAQ</a></li>'); } if ($page=="about") { echo('<li><a id="current" href="about.shtml">About Bryce</a></li>'); } else { echo('<li><a href="about.shtml">About Bryce</a></li>'); } echo('<li><a href="contact.php">Contact Bryce</a></li>'); if ($page=="sign guestbook") { echo('<li><a id="current" href="sign.shtml">Sign Guestbook</a></li>'); } else { echo('<li><a href="sign.shtml">Sign Guestbook</a></li>'); } if ($page=="view guestbook") { echo('<li><a id="current" href="view.shtml">View Guestbook</a></li>'); } else { echo('<li><a href="view.shtml">View Guestbook</a></li>'); } echo('</ul>'); echo('</li>'); echo('<li><span class="bold">Info</span>'); echo('<ul>'); if ($page=="projects") { echo('<li><a id="current" href="projects.shtml">Projects</a></li>'); } else { echo('<li><a href="projects.shtml">Projects</a></li>'); } if ($page=="books") { echo('<li><a id="current" href="books.shtml">Books</a></li>'); } else { echo('<li><a href="books.shtml">Books</a></li>'); } echo('</ul>'); echo('</li>'); echo('<li><span class="bold">Misc.</span>'); echo('<ul>'); if ($page=="cover designs") { echo('<li><a id="current" href="coverdesigns.shtml">Cover Designs</a></li>'); } else { echo('<li><a href="coverdesigns.shtml">Cover Designs</a></li>'); } echo('<li><a target="_blank" href="http://www.lulu.com/brycecampbellsbooks">Lulu Store</a></li>'); echo('<li><a href="rss/">RSS</a></li>'); echo('</ul>'); echo('</li>'); ?> In order to give you guys an idea of what the highlighting effect should look like, here is the CSS that is supposed to be applied to the current page: #current { font-style: italic; text-decoration: none; color: #000000; } When looking up what I was doing wrong, it told me that I was implementing it right, but it does not seem that the PHP is getting the values.

    Read the article

  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties: What we need HA a scalable storage backend offline/disconnected operation on the client to account for network outages cross-platform access client-side access from certainly Windows (probably XP upwards), possibly Linux backend integrates with AD/LDAP (permission management (user/group management, ...)) should work reasonably well over slow WAN-links Another problem is that we don't really know all possible use cases here, if people need to be able to have concurrent access to shared files or if they will only be accessing their own files, so a possible solution needs to account for concurrent access and how conflict management would look in this case from a user's point of view. This two years old blog posts sums up the impression that I have been getting during the last couple of days of research, that there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions but that there is none that supports disconnected operation nicely and natively, but I am hoping that we have missed an obvious solution. What we have tried OpenAFS We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week but there are several problems with it: it's a real pain to set up there are no official RHEL/CentOS packages the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, this is an absolute no-go Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8, the current client for the 1.7 does but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7. Suffice to say, we couldn't get it working and the whole setup has been so unstable and complicated to setup that it's just not an option for production. Samba + Unison Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation we went for a simpler idea that would sync files against a Samba server using Unison. This has the following advantages: Samba integrates with ADs; it's a pain but can be done. Samba solves the problem of remotely accessing the storage from Windows but introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need a HA Samba setup on top of that to maintain HA which probably adds a lot of additional complexity. I vaguely remember trying to implement redundancy with Samba before and I could not silently failover between servers. Even when online, you are working with local files which will result in more conflicts than would be necessary if a local cache were only touched when disconnected It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well or even at all. Samba + "Offline Files" After that we became a little desparate and gave Windows "offline files" a chance. 
We figured that having something that is inbuilt into the OS would reduce administrative efforts, helps blaming someone else when it's not working properly and should just work since people have been using this for years. Right? Wrong. We really wanted it to work, but it just doesn't. 30 minutes of copying files around and unplugging network cables/disabling network interfaces left us with (silent! there is only a tiny notification in Windows explorer in the statusbar, which doesn't even open Sync Center if you click on it!) undeletable files on the server (!) and conflicts that should not even be conflicts. In the end, we had one successful sync of a tiny text file, everything else just exploded horribly. Beyond that, there are other problems: Microsoft admits that "offline files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all which would mean those files become unavailable if the connection drop In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions. Summary Unless there is another fault-tolerant DFS that supports Windows natively I assume that stacking a HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?

    Read the article

  • Android AlertDialog wait for result in calling activity

    - by insanesam
    I am trying to use an AlertDialog in my app to select the quantity of an item. The problem is that the activity that calls the AlertDialog doesn't wait for it to update the item before it adds it to the SQLite Database and change intents. At the moment, the QuantitySelector (AlertDialog) appears, then disappears straight away and changes the MealActivity class (which is just a ListView that reads from the database) through the intent change with an update to the database with quantity 0. I need the Activity to wait for the AlertDialog to close before it updates the database. What would be the correct way of implementing this? Here is some code for you: QuantitySelector (which runs the alertdialog): public class QuantitySelector{ protected static final int RESULT_OK = 0; private Context _context; private DatabaseHandler db; private HashMap<String, Double> measures; private Item item; private View v; private EditText quan; private NumberPicker pick; private int value; private Quantity quantity; /** * Function calls the quantity selector AlertDialog * @param _c: The application context * @param item: The item to be added to consumption * @return The quantity that is consumed */ public void select(Context _c, Item item, Quantity quantity){ this._context = _c; this.item = item; this.quantity = quantity; db = new DatabaseHandler(_context); //Get the measures to display createData(); //Set up the custom view LayoutInflater inflater = LayoutInflater.from(_context); v = inflater.inflate(R.layout.quantity_selector, null); //Set up the input fields quan = (EditText) v.findViewById(R.id.quantityNumber); pick = (NumberPicker) v.findViewById(R.id.numberPicker1); //Set up the custom measures into pick pick.setMaxValue(measures.size()-1); pick.setDisplayedValues(measures.keySet().toArray(new String[0])); //Start the alert dialog runDialog(); } public void createData(){ measures = new HashMap<String, Double>(); //Get the measurements from the database if(item!=null){ measures.putAll(db.getMeasures(item)); } //Add grams as the default measurement if(!measures.keySet().contains("grams")){ //Add grams as a standard measure measures.put("grams", 1.0); } } public void runDialog(){ AlertDialog dialog = new AlertDialog.Builder(_context).setTitle("Select Quantity") .setView(v) .setPositiveButton("OK", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { //Change the consumption to the new quantity if(!quan.getText().toString().matches("")){ value = Integer.parseInt(quan.getText().toString()); //Check if conversion from other units is needed String s[] = pick.getDisplayedValues(); String a = s[pick.getValue()]; //Convert the chosen measure back to grams if(!a.equals("grams")){ for(String m : measures.keySet()){ if(m==a){ value = (int) (value * measures.get(m)); } } } } quantity.setQuantity(value); dialog.dismiss(); } }) .setNegativeButton("Cancel", null).create(); dialog.show(); } } The method from favouritesAdapter (which calls the alertdialog): add.setOnClickListener(new OnClickListener(){ public void onClick(View arg0) { QuantitySelector q = new QuantitySelector(); Quantity quan = new Quantity(); q.select(_context, db.getItem(p.getID()), quan); db.addConsumption(p.getID(), p.getFavouriteShortName(), quan.getQuantity(), "FAVOURITE"); Intent intent = new Intent(_context,MealActivity.class); _context.startActivity(intent); } }); All help is appreciated :)

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it but I would like your opinion The first approach I considered was: class Customer : EntityBase<Customer> { public void ChangeEmail(string email) { if(string.IsNullOrWhitespace(email)) throw new DomainException(“...”); if(!email.IsEmail()) throw new DomainException(); if(email.Contains(“@mailinator.com”)) throw new DomainException(); } } I actually do not like this validation because even when I am encapsulating the validation logic in the correct entity, this is violating the Open/Close principle (Open for extension but Close for modification) and I have found that violating this principle, code maintenance becomes a real pain when the application grows up in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, hard to maintain but the real reason why I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example but in RL the validation could be more complex So following the philosophy of Udi Dahan, making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer> { private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver; public bool IsSatisfiedBy(Customer customer) { return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email); } } But then I realize that in order to follow this approach I had to mutate my entities first in order to pass the value being valdiated, in this case the email, but mutating them would cause my domain events being fired which I wouldn’t like to happen until the new email is valid So after considering these approaches, I came out with this one, since I am going to implement a CQRS architecture: class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand> { public void IsValid(Customer entity, ChangeEmailCommand command) { if(!command.Email.HasValidDomain()) throw new DomainException(“...”); } } Well that’s the main idea, the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user and since the validators are considered injectable objects they could have external dependencies injected if the validation requires it. Now the dilemma, I am happy with a design like this because my validation is encapsulated in individual objects which brings many advantages: easy unit test, easy to maintain, domain invariants are explicitly expressed using the Ubiquitous Language, easy to extend, validation logic is centralized and validators can be used together to enforce complex domain rules. And even when I know I am placing the validation of my entities outside of them (You could argue a code smell - Anemic Domain) but I think the trade-off is acceptable But there is one thing that I have not figured out how to implement it in a clean way. How should I use this components... 
Since they will be injected, they won’t fit naturally inside my domain entities, so basically I see two options: Pass the validators to each method of my entity Validate my objects externally (from the command handler) I am not happy with the option 1 so I would explain how I would do it with the option 2 class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand> { public void Execute(ChangeEmailCommand command) { private IEnumerable<IDomainInvariantValidator> validators; // here I would get the validators required for this command injected, and in here I would validate them, something like this using (var t = this.unitOfWork.BeginTransaction()) { var customer = this.unitOfWork.Get<Customer>(command.CustomerId); this.validators.ForEach(x =. x.IsValid(customer, command)); // here I know the command is valid // the call to ChangeEmail will fire domain events as needed customer.ChangeEmail(command.Email); t.Commit(); } } } Well this is it. Can you give me your thoughts about this or share your experiences with Domain entities validation EDIT I think it is not clear from my question, but the real problem is: Hiding the domain rules has serious implications in the future maintainability of the application, and also domain rules change often during the life-cycle of the app. Hence implementing them with this in mind would let us extend them easily. Now imagine in the future a rules engine is implemented, if the rules are encapsulated outside of the domain entities, this change would be easier to implement
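
    To make option 2 a little more concrete, here is a sketch of the handler with the validators constructor-injected. The type names follow the post; IUnitOfWork is an assumed abstraction over the transaction/session shown above, and the container is assumed to supply every validator closed over Customer and ChangeEmailCommand:

        // Sketch (option 2): all invariant validators registered for ChangeEmailCommand
        // run before the aggregate is touched, so no domain events fire for invalid input.
        public class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand>
        {
            private readonly IUnitOfWork unitOfWork;
            private readonly IEnumerable<IDomainInvariantValidator<Customer, ChangeEmailCommand>> validators;

            public ChangeEmailCommandHandler(
                IUnitOfWork unitOfWork,
                IEnumerable<IDomainInvariantValidator<Customer, ChangeEmailCommand>> validators)
            {
                this.unitOfWork = unitOfWork;
                this.validators = validators;
            }

            public void Execute(ChangeEmailCommand command)
            {
                using (var t = unitOfWork.BeginTransaction())
                {
                    var customer = unitOfWork.Get<Customer>(command.CustomerId);

                    // Validate first; only then mutate the entity.
                    foreach (var validator in validators)
                        validator.IsValid(customer, command);

                    customer.ChangeEmail(command.Email);
                    t.Commit();
                }
            }
        }

    The important design point is that validation completes before ChangeEmail is called, which addresses the concern about domain events firing for an email that turns out to be invalid.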

    Read the article

  • ASP.NET how to implement IServiceLayer

    - by rockinthesixstring
    I'm trying to follow the tutorial found here to implement a service layer in my MVC application. What I can't figure out is how to wire it all up. here's what I have so far. IUserRepository.vb Namespace Data Public Interface IUserRepository Sub AddUser(ByVal openid As String) Sub UpdateUser(ByVal id As Integer, ByVal about As String, ByVal birthdate As DateTime, ByVal openid As String, ByVal regionid As Integer, ByVal username As String, ByVal website As String) Sub UpdateUserReputation(ByVal id As Integer, ByVal AmountOfReputation As Integer) Sub DeleteUser(ByVal id As Integer) Function GetAllUsers() As IList(Of User) Function GetUserByID(ByVal id As Integer) As User Function GetUserByOpenID(ByVal openid As String) As User End Interface End Namespace UserRepository.vb Namespace Data Public Class UserRepository : Implements IUserRepository Private dc As DataDataContext Public Sub New() dc = New DataDataContext End Sub #Region "IUserRepository Members" Public Sub AddUser(ByVal openid As String) Implements IUserRepository.AddUser Dim user = New User user.LastSeen = DateTime.Now user.MemberSince = DateTime.Now user.OpenID = openid user.Reputation = 0 user.UserName = String.Empty dc.Users.InsertOnSubmit(user) dc.SubmitChanges() End Sub Public Sub UpdateUser(ByVal id As Integer, ByVal about As String, ByVal birthdate As Date, ByVal openid As String, ByVal regionid As Integer, ByVal username As String, ByVal website As String) Implements IUserRepository.UpdateUser Dim user = (From u In dc.Users Where u.ID = id Select u).Single user.About = about user.BirthDate = birthdate user.LastSeen = DateTime.Now user.OpenID = openid user.RegionID = regionid user.UserName = username user.WebSite = website dc.SubmitChanges() End Sub Public Sub UpdateUserReputation(ByVal id As Integer, ByVal AmountOfReputation As Integer) Implements IUserRepository.UpdateUserReputation Dim user = (From u In dc.Users Where u.ID = id Select u).FirstOrDefault ''# Simply take the current reputation from the select statement ''# and add the proper "AmountOfReputation" user.Reputation = user.Reputation + AmountOfReputation dc.SubmitChanges() End Sub Public Sub DeleteUser(ByVal id As Integer) Implements IUserRepository.DeleteUser Dim user = (From u In dc.Users Where u.ID = id Select u).FirstOrDefault dc.Users.DeleteOnSubmit(user) dc.SubmitChanges() End Sub Public Function GetAllUsers() As System.Collections.Generic.IList(Of User) Implements IUserRepository.GetAllUsers Dim users = From u In dc.Users Select u Return users.ToList End Function Public Function GetUserByID(ByVal id As Integer) As User Implements IUserRepository.GetUserByID Dim user = (From u In dc.Users Where u.ID = id Select u).FirstOrDefault Return user End Function Public Function GetUserByOpenID(ByVal openid As String) As User Implements IUserRepository.GetUserByOpenID Dim user = (From u In dc.Users Where u.OpenID = openid Select u).FirstOrDefault Return user End Function #End Region End Class End Namespace IUserService.vb Namespace Data Interface IUserService End Interface End Namespace UserService.vb Namespace Data Public Class UserService : Implements IUserService Private _ValidationDictionary As IValidationDictionary Private _repository As IUserRepository Public Sub New(ByVal validationDictionary As IValidationDictionary, ByVal repository As IUserRepository) _ValidationDictionary = validationDictionary _repository = repository End Sub Protected Function ValidateUser(ByVal UserToValidate As User) As Boolean Dim isValid As Boolean = True If 
UserToValidate.OpenID.Trim().Length = 0 Then _ValidationDictionary.AddError("OpenID", "OpenID is Required") isValid = False End If If UserToValidate.MemberSince = Nothing Then _ValidationDictionary.AddError("MemberSince", "MemberSince is Required") isValid = False End If If UserToValidate.LastSeen = Nothing Then _ValidationDictionary.AddError("LastSeen", "LastSeen is Required") isValid = False End If If UserToValidate.Reputation = Nothing Then _ValidationDictionary.AddError("Reputation", "Reputation is Required") isValid = False End If Return isValid End Function End Class End Namespace I have also wired up the IValidationDictionary.vb and the ModelStateWrapper.vb as described in the article above. What I'm having a problem with is actually implementing it in my controller. My controller looks something like this. Public Class UsersController : Inherits BaseController Private UserService As Data.IUserService Public Sub New() UserService = New Data.UserService(New Data.ModelStateWrapper(Me.ModelState), New Data.UserRepository) End Sub Public Sub New(ByVal service As Data.IUserService) UserService = service End Sub .... End Class However, on the line that says Public Sub New(ByVal service As Data.IUserService) I'm getting an error: 'service' cannot expose type 'Data.IUserService' outside the project through class 'UsersController' So my question has two parts: How can I properly implement a Service Layer in my application using the concepts from that article? And should there be any content within my IUserService.vb?
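    Regarding the second question, here is a minimal sketch (my assumption, written in C# rather than VB for brevity) of what the service contract could expose; the member list simply mirrors the controller-facing operations and is not prescribed by the referenced article:

    using System;
    using System.Collections.Generic;

    // Hypothetical shape of the service contract: the controller talks only to this
    // interface, and the concrete UserService validates before delegating to IUserRepository.
    public interface IUserService
    {
        bool AddUser(string openId);
        bool UpdateUser(int id, string about, DateTime birthDate, string openId,
                        int regionId, string userName, string webSite);
        bool DeleteUser(int id);
        IList<User> GetAllUsers();
        User GetUserById(int id);
        User GetUserByOpenId(string openId);
    }

    The Boolean returns give the service a way to report validation failures (collected in the IValidationDictionary) back to the controller. As for the compile error on the constructor: a VB interface declared without an access modifier defaults to Friend, so the Public UsersController cannot expose Data.IUserService in a public constructor signature; declaring it as Public Interface IUserService usually clears that error.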

    Read the article

  • DoubleBuffering in Java

    - by DDP
    Hello there, I'm having some trouble implementing double buffering in my program. Before you faint from the wall of text, you should know that a lot of it is there just in case you need to know. The actual place where I think I'm having problems is in one method. I recently looked up a tutorial on the gpwiki about double buffering and tried to merge the code they had into my existing code. I get the following error: "java.lang.IllegalStateException: Component must have a valid peer". I don't know if it makes any difference, but the following is the code with the main method. This is just a Frame that displays the ChronosDisplay class inside it. I omitted irrelevant code with "..." public class CDM extends JFrame { public CDM(String str) { super("CD:M - "+str); try { ... ChronosDisplay theGame = new ChronosDisplay(str); ((Component)theGame).setFocusable(true); add(theGame); } catch(Exception e) { System.out.println("CDM ERROR: " +e); } } public static void main( String args[] ) { CDM run = new CDM("DP_Mini"); } } Here is the code where I think the problem resides (I think the problem is in the paint() method). This class is displayed in the CDM class: public class ChronosDisplay extends Canvas implements Runnable { String mapName; public ChronosDisplay (String str) { mapName = str; new Thread(this).start(); setVisible(true); createBufferStrategy(2); } public void paint( Graphics window ) { BufferStrategy b = getBufferStrategy(); Graphics g = null; window.setColor(Color.white); try { g = b.getDrawGraphics(); paintMap(g); paintUnits(g); paintBullets(g); } finally { g.dispose(); } b.show(); Toolkit.getDefaultToolkit().sync(); } public void paintMap( Graphics window ) { TowerMap m = new TowerMap(); try { m = new TowerMap(mapName); for(int x=0; x<m.getRows()*50; x+=50) { for(int y = 0; y<m.getCols()*50; y+=50) { int tileType = m.getLocation(x/50,y/50); Image img; if(tileType == 0) { Tile0 t = new Tile0(x,y); t.draw(window); } ...// More similar if statements for other integers } catch(Exception e) ... } ...// Additional methods not shown here public void run() { try { while(true) { Thread.currentThread().sleep(20); repaint(); } } catch(Exception e) ... } } If you're curious (I doubt it matters), the draw() method in the Tile0 class is: public void draw( Graphics window ) { window.drawImage(img,getX(),getY(),50,50,null); } Any pointers, tips, or solutions are greatly appreciated. Thanks for your time! :D

    Read the article

  • List<T> and IEnumerable difference

    - by Jonas Elfström
    While implementing this generic merge sort, as a kind of Code Kata, I stumbled on a difference between IEnumerable and List that I need help to figure out. Here's the MergeSort public class MergeSort<T> { public IEnumerable<T> Sort(IEnumerable<T> arr) { if (arr.Count() <= 1) return arr; int middle = arr.Count() / 2; var left = arr.Take(middle).ToList(); var right = arr.Skip(middle).ToList(); return Merge(Sort(left), Sort(right)); } private static IEnumerable<T> Merge(IEnumerable<T> left, IEnumerable<T> right) { var arrSorted = new List<T>(); while (left.Count() > 0 && right.Count() > 0) { if (Comparer<T>.Default.Compare(left.First(), right.First()) < 0) { arrSorted.Add(left.First()); left=left.Skip(1); } else { arrSorted.Add(right.First()); right=right.Skip(1); } } return arrSorted.Concat(left).Concat(right); } } If I remove the .ToList() on the left and right variables it fails to sort correctly. Do you see why? Example var ints = new List<int> { 5, 8, 2, 1, 7 }; var mergeSortInt = new MergeSort<int>(); var sortedInts = mergeSortInt.Sort(ints); With .ToList() [0]: 1 [1]: 2 [2]: 5 [3]: 7 [4]: 8 Without .ToList() [0]: 1 [1]: 2 [2]: 5 [3]: 7 [4]: 2 Edit It was my stupid test that got me. I tested it like this: var sortedInts = mergeSortInt.Sort(ints); ints.Sort(); if (Enumerable.SequenceEqual(ints, sortedInts)) Console.WriteLine("ints sorts ok"); just changing the first row to var sortedInts = mergeSortInt.Sort(ints).ToList(); removes the problem (and the lazy evaluation). EDIT 2010-12-29 I thought I would figure out just how the lazy evaluation messes things up here but I just don't get it. Remove the .ToList() in the Sort method above like this var left = arr.Take(middle); var right = arr.Skip(middle); then try this var ints = new List<int> { 5, 8, 2 }; var mergeSortInt = new MergeSort<int>(); var sortedInts = mergeSortInt.Sort(ints); ints.Sort(); if (Enumerable.SequenceEqual(ints, sortedInts)) Console.WriteLine("ints sorts ok"); When debugging You can see that before ints.Sort() a sortedInts.ToList() returns [0]: 2 [1]: 5 [2]: 8 but after ints.Sort() it returns [0]: 2 [1]: 5 [2]: 5 What is really happening here?
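    As a side note for anyone puzzling over the same thing, here is a minimal standalone snippet (my own illustration, not from the post) showing the deferred-execution effect in isolation: a query built with Skip is re-evaluated against whatever the source list contains at the moment it is enumerated, so sorting the source afterwards changes what the previously "sorted" sequence yields.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class DeferredExecutionDemo
    {
        static void Main()
        {
            var source = new List<int> { 5, 8, 2 };

            // Skip builds a lazy query over 'source'; nothing is evaluated yet.
            IEnumerable<int> tail = source.Skip(1);

            Console.WriteLine(string.Join(",", tail)); // prints 8,2 - evaluated against { 5, 8, 2 }

            source.Sort(); // source is now { 2, 5, 8 }

            Console.WriteLine(string.Join(",", tail)); // prints 5,8 - re-evaluated against the sorted list
        }
    }

    Calling .ToList() inside Sort (or on the result, as in the edit) snapshots the elements, which is why the materialized version is immune to the later ints.Sort().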

    Read the article

  • Where is the mistake?

    - by mr.bio
    Hi... I am implementing a simple linked list in C++. I have a mistake and I don't see it :( #include <stdexcept> #include <iostream> struct Node { Node(Node *next, int value): next(next), value(value) { } Node *next; int value; }; class List { Node *first; // first element, 0 if the list is empty int len; // length of the list Node *nthNode(int index); // helper function: O(index) public: // default constructor (length 0): O(1) List():first(0),len(0){ } // copy constructor: O(other.len) List(const List & other){ }; // assignment operator: O(len + other.len) List &operator=(const List &other) { clear(); if(!other.len) return *this; Node *it = first = new Node(0,other.first->value); for (Node *n = other.first->next; n; n = n->next) { it = it->next = new Node(0, n->value); } len = other.len; return *this; } // destructor (frees the memory of all nodes): O(len) ~List(){ }; // appends an element to the end of the list: O(len) void push_back(int value){ }; // inserts an element at the front of the list: O(1) void push_front(int value){ Node* front = new Node(0,value); if(first){ first = front; front->next = 0; }else{ front->next = first; first = front; } len++; }; // returns a reference to the index-th element: O(index) int &at(int index){ int count = 0 ; int ret ; Node *it = first; for (Node *n = first->next; n; n = n->next) { if(count==index) ret = n->value; count++; } return ret ; }; // removes all elements: O(len) void clear(){ }; // displays all elements: here O(len * len) void show() { std::cout << " List [" << len << " ]:{ "; for (int i = 0; i < len; ++i) { std::cout << at(i) << (i == len - 1 ? '}' : ','); } std::cout << std::endl; } }; /* * */ int main() { List l; // l. push_back(1); // l. push_back(2); l. push_front(7); l. push_front(8); l. push_front(9); l.show(); // List(l). show(); } It works... but the output is: List [3 ]:{ 0,134520896,9484585}

    Read the article

  • Nginx no longer serves uwsgi application behind HAProxy - Looks for static file instead

    - by Ralph
    We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients implementing requests.py. My problem is that our web app works fine if it's behind HAProxy and hosted by Apache using mod_wsgi. It also works fine if the clients interact with nginx directly. It doesn't work though when using HAProxy in front of nginx. My guess is that HAProxy somehow modifies the request and thus nginx behaves differently i.e. looking for a static file instead of calling the WSGI container. Unfortunately I can't figure out what's exactly going (wr)on(g). Here are the relevant config sections of these three component's config files. At least I guess they are interesting. If you miss anything, please let me know. 1) haproxy.conf frontend app-lb bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem default_backend nginx-servers mode http backend nginx-servers balance leastconn option forwardfor server nginx-01 nginx-server-int-01.domain.com:80 check 2) nginx.conf: sendfile off; #tcp_nopush on; keepalive_timeout 65; include /etc/nginx/conf.d/*.conf; server { server_name nginx-server-int-01.domain.com; root /path/to/app/; location / { uwsgi_pass unix:///tmp/app.sock; include uwsgi_params; uwsgi_read_timeout 600; # Requests can run for a serious long time } 3) uwsgi.ini [uwsgi] chdir = /path/to/app/ chmod-socket = 777 no-default-app = True socket = /tmp/app.sock manage-script-name = True mount = /dids=did.py mount = /replicas=replica.py callable = application Now when I let my clients go against nginx-server-int-01.domain.com everything is fine. In the access.log of nginx lines like these are appearing: 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" But when I switch the clients to go against HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like 
these: 2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer" 2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" As you can see, the request looks the same; only the client IP changed, from the client's host to that of loadbalancer.domain.com. But for whatever reason nginx seems to assume that it is a static file to be served, which eventually results in the file-not-found message. I searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph

    Read the article

  • Why does virtual assignment behave differently than other virtual functions of the same signature?

    - by David Rodríguez - dribeas
    While playing with implementing a virtual assignment operator I have ended with a funny behavior. It is not a compiler glitch, since g++ 4.1, 4.3 and VS 2005 share the same behavior. Basically, the virtual operator= behaves differently than any other virtual function with respect to the code that is actually being executed. struct Base { virtual Base& f( Base const & ) { std::cout << "Base::f(Base const &)" << std::endl; return *this; } virtual Base& operator=( Base const & ) { std::cout << "Base::operator=(Base const &)" << std::endl; return *this; } }; struct Derived : public Base { virtual Base& f( Base const & ) { std::cout << "Derived::f(Base const &)" << std::endl; return *this; } virtual Base& operator=( Base const & ) { std::cout << "Derived::operator=( Base const & )" << std::endl; return *this; } }; int main() { Derived a, b; a.f( b ); // [0] outputs: Derived::f(Base const &) (expected result) a = b; // [1] outputs: Base::operator=(Base const &) Base & ba = a; Base & bb = b; ba = bb; // [2] outputs: Derived::operator=(Base const &) Derived & da = a; Derived & db = b; da = db; // [3] outputs: Base::operator=(Base const &) ba = da; // [4] outputs: Derived::operator=(Base const &) da = ba; // [5] outputs: Derived::operator=(Base const &) } The effect is that the virtual operator= has a different behavior than any other virtual function with the same signature ([0] compared to [1]), by calling the Base version of the operator when called through real Derived objects ([1]) or Derived references ([3]) while it does perform as a regular virtual function when called through Base references ([2]), or when either the lvalue or rvalue are Base references and the other a Derived reference ([4],[5]). Is there any sensible explanation to this odd behavior?

    Read the article

  • which control in vs08 aspx c# will be able to take html tags as input and display it formatted accor

    - by user287745
    I asked a few questions regarding this and got answers that indicate I will have to build a custom editor (using Markdown, etc.) to achieve something like the stackoverflow.com/questions/ask page. There is a time limitation and the requirements are substantial. I have to achieve: 1) allow users to use HTML tags when giving input, 2) save that complete input to the SQL DB, 3) display the data from the DB within a control which renders the formatting as the tags specify. I am aware the Label and Literal controls support HTML tags; the problem is how to allow the user to input them - the TextBox does not seem to support HTML tags? Thank you. I need help in implementing the effect of the WRITE A NEW POST page on stackoverflow.com. <%--The editor--%> <asp:UpdatePanel ID="UpdatePanel7" runat="server"> <ContentTemplate> <asp:Label ID="Label4" runat="server" Text="Label"></asp:Label> <div id="textchanging" onkeyup="textoftextbox('TextBox3'); return false;"> <asp:TextBox ID="TextBox3" runat="server" CausesValidation="True" ></asp:TextBox> </div> <%-- OnTextChanged="textoftextbox('TextBox3'); return false;" gives too many literals error. --%> <asp:Button ID="Button6" runat="server" Text="Post The Comment, The New, Write on Wall" onclick="Button6_Click" /> <asp:Label ID="Label2" runat="server" Text="Label"></asp:Label> <asp:TextBox ID="TextBox4" runat="server" CausesValidation="True"></asp:TextBox> <asp:Literal ID="Literal2" runat="server" Text="this must change" here comes the div <div id="displayingarea" runat="server">This area will change</div> </ContentTemplate> </asp:UpdatePanel> <%--The editor ends--%> protected void Button6_Click(object sender, EventArgs e) { Label3.Text = displayingarea.InnerHtml; } As you see, I have implemented the effect where text typed in the textbox appears as a reflection in the div, with fully formatted text according to the HTML tags used. The textbox does not allow HTML! Well, the text box does not have to; the script just extracts what is typed, without letting the server know. The HTML of the div tag also changes as you type in the textbox. Now, there is a "post" button within an update panel to avoid a full postback of the page. What I need is: when the button is clicked, the value within the div tag ("innerHTML") is passed on to the label. Yes, I know the changes are only made on the client side, so when the button click event occurs the server is not aware of the new data within the div tag; therefore the server assigns the original HTML within the div to the label. I need help to overcome this. How is it that when we enter text in the textbox and press the button, code like label1.Text = textbox1.Text; works even in an update panel, but the above code for extracting the innerHTML typed at the user's end (similar to typing in a textbox) does not work?
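    One likely sticking point in the last paragraph: a plain <div> is never included in a postback, so the server-side click handler can only ever see the markup the server itself rendered. A common workaround - my suggestion, not something from the original post - is to have the same client script that updates the div also copy its innerHTML into an <asp:HiddenField ID="hiddenHtml" runat="server" /> (a hypothetical control name), since hidden field values do travel with the postback. The code-behind side would then be roughly:

    // Sketch of the click handler under the assumption above; hiddenHtml is the
    // hypothetical HiddenField kept in sync by the client-side script.
    protected void Button6_Click(object sender, EventArgs e)
    {
        // The hidden field's value reflects the client-side edits that the
        // div alone would not report back to the server.
        Label3.Text = hiddenHtml.Value;
    }

    Note that if untrusted users can submit markup this way, it should be sanitized or whitelisted before it is stored or rendered.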

    Read the article

  • Help with 2-part question on ASP.NET MVC and Custom Security Design

    - by JustAProgrammer
    I'm using ASP.NET MVC and I am trying to separate a lot of my logic. Eventually, this application will be pretty big. It's basically a SaaS app that I need to allow for different kinds of clients to access. I have a two part question; the first deals with my general design and the second deals with how to utilize in ASP.NET MVC Primarily, there will initially be an ASP.NET MVC "client" front-end and there will be a set of web-services for third parties to interact with (perhaps mobile, etc). I realize I could have the ASP.NET MVC app interact just through the Web Service but I think that is unnecessary overhead. So, I am creating an API that will essentially be a DLL that the Web App and the Web Services will utilize. The API consists of the main set of business logic and Data Transfer Objects, etc. (So, this includes methods like CreateCustomer, EditProduct, etc for example) Also, my permissions requirements are a little complicated. I can't really use a straight Roles system as I need to have some fine-grained permissions (but all permissions are positive rights). So, I don't think I can really use the ASP.NET Roles/Membership system or if I can it seems like I'd be doing more work than rolling my own. I've used Membership before and for this one I think I'd rather roll my own. Both the Web App and Web Services will need to keep security as a concern. So, my design is kind of like this: Each method in the API will need to verify the security of the caller In the Web App, each "page" ("action" in MVC speak) will also check the user's permissions (So, don't present the user with the "Add Customer" button if the user does not have that right but also whenever the API receives AddCustomer(), check the security too) I think the Web Service really needs the checking in the DLL because it may not always be used in some kind of pre-authenticated context (like using Session/Cookies in a Web App); also having the security checks in the API means I don't really HAVE TO check it in other places if I'm on a mobile (say iPhone) and don't want to do all kinds of checking on the client However, in the Web App I think there will be some duplication of work since the Web App checks the user's security before presenting the user with options, which is ok, but I was thinking of a way to avoid this duplication by allowing the Web App to tell the API not check the security; while the Web Service would always want security to be verified Is this a good method? If not, what's better? If so, what's a good way of implementing this. I was thinking of doing this: In the API, I would have two functions for each action: // Here, "Credential" objects are just something I made up public void AddCustomer(string customerName, Credential credential , bool checkSecurity) { if(checkSecurity) { if(Has_Rights_To_Add_Customer(credential)) // made up for clarity { AddCustomer(customerName); } else // throw an exception or somehow present an error } else AddCustomer(customerName); } public void AddCustomer(string customerName) { // actual logic to add the customer into the DB or whatever // Would it be good for this method to verify that the caller is the Web App // through some method? } So, is this a good design or should I do something differently? My next question is that clearly it doesn't seem like I can really use [Authorize ...] for determining if a user has the permissions to do something. 
In fact, one action might depend on a variety of permissions, and the View might hide or show certain options depending on the permission. What's the best way to do this? Should I have some kind of PermissionSet object that the user carries around throughout the Web App (in Session or wherever), have the MVC action method check whether that user can use that action, and then have the View check the various permissions (via ViewData or similar) to decide what to hide or show?
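    Since [Authorize] on its own only understands users and roles, one way to express fine-grained positive rights is a custom attribute derived from AuthorizeAttribute. The sketch below is only an illustration under that assumption; RequirePermissionAttribute, PermissionService, and the permission names are hypothetical, not part of the question:

    using System.Web;
    using System.Web.Mvc;

    // Checks a single named permission against the application's own permission store.
    public class RequirePermissionAttribute : AuthorizeAttribute
    {
        private readonly string permission;

        public RequirePermissionAttribute(string permission)
        {
            this.permission = permission;
        }

        protected override bool AuthorizeCore(HttpContextBase httpContext)
        {
            if (!httpContext.User.Identity.IsAuthenticated)
                return false;

            // PermissionService is a hypothetical lookup against the custom security tables.
            return PermissionService.UserHasPermission(
                httpContext.User.Identity.Name, permission);
        }
    }

    // Usage on an action:
    // [RequirePermission("AddCustomer")]
    // public ActionResult AddCustomer(...) { ... }

    The View side can then ask the same PermissionService (exposed through ViewData or a base view model) whether to render a given button, so the check logic lives in one place for both the action filter and the markup.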

    Read the article

  • AES BYTE SYSTOLIC ARCHITECTURE.

    - by anum
    We are implementing an AES byte systolic architecture. CODE: module key_expansion(kld,clk,key,key_expand,en); input kld,clk,en; input [127:0] key; wire [31:0] w0,w1,w2,w3; output [127:0] key_expand; reg[127:0] key_expand; reg [31:0] w[3:0]; reg [3:0] ctr; //reg [31:0] w0,w1,w2,w3; wire [31:0] c0,c1,c2,c3; wire [31:0] tmp_w; wire [31:0] subword; wire [31:0] rcon; assign w0 = w[0]; assign w1 = w[1]; assign w2 = w[2]; assign w3 = w[3]; //always @(posedge clk) always @(posedge clk) begin w[0] <= #1 kld ? key[127:096] : w[0]^subword^rcon; end always @(posedge clk) begin w[1] <= #1 kld ? key[095:064] : w[0]^w[1]^subword^rcon; end always @(posedge clk) begin w[2] <= #1 kld ? key[063:032] : w[0]^w[2]^w[1]^subword^rcon; end always @(posedge clk) begin w[3] <= #1 kld ? key[031:000] : w[0]^w[3]^w[2]^w[1]^subword^rcon; end assign tmp_w = w[3]; aes_sbox u0( .a(tmp_w[23:16]), .d(subword[31:24])); aes_sbox u1( .a(tmp_w[15:08]), .d(subword[23:16])); aes_sbox u2( .a(tmp_w[07:00]), .d(subword[15:08])); aes_sbox u3( .a(tmp_w[31:24]), .d(subword[07:00])); aes_rcon r0( .clk(clk), .kld(kld), .out_rcon(rcon)); //assign key_expand={w0,w1,w2,w3}; //assign key_expand={w0,w1,w2,w3}; always@(posedge clk) begin if (!en) begin ctr<=0; end else if (|ctr) begin key_expand<=0; ctr<=(ctr+1)%16; end else if (!(|ctr)) begin key_expand<={w0,w1,w2,w3}; ctr<=(ctr+1)%16; end end endmodule Problem: the Verilog code is attached above. The basic problem is that we want to generate a new key only after 16 clock cycles, whereas initially the design would generate a new key on every posedge of the clock. In order to stop the value from being assigned to w[0], w[1], w[2], w[3], we implemented the enable/counter logic shown above. It has enabled us to produce output on key_expand after 16 cycles, but the value of the required keys has changed, because key_expand takes up the latest value from w[0], w[1], w[2], w[3], whereas we require the first value generated. We should block the value from being assigned to w[0] through w[3] somehow, but we are stuck. Please help.

    Read the article

  • Update to Easy Slider 1.7 made all my JQuery code stop working.

    - by Anders H
    I'm a pretty novice jQuery user. I've got some experience implementing different plugins but would be lost trying to customize my own. I can't share the exact site details with you due to an NDA, so I hope someone can give me a little help. I've got a project due today (just HTML/CSS/jQuery). It has a lightbox, a show/hide login menu, and a slider, Easy Slider 1.5. Everything was working together until I attempted to update to Easy Slider 1.7 (see link on the same page; I'm too new to post more than 1 link). When I did so, jQuery stopped working for all the plugins. I've attempted to revert back to the original state by undoing my work (I didn't do much), and jQuery remains broken. The Firebug error console showed no errors. I can't find anything in the code no matter how hard I look at it. Can anyone help me troubleshoot this jQuery problem? Delivery is supposed to be tonight for the project. EDIT: Generic header info: <!-- Global Style Sheet --> <link rel="stylesheet" href="style.css" media="screen" type="text/css" /> <!-- Cufon --> <script src="cufon/cufon.js" type="text/javascript"></script> <script src="cufon/gotham_325-gotham_350.font.js" type="text/javascript"></script> <!-- jQuery Javascript --> <script type="text/javascript" language="javascript" src="js/jquery-1.3.2.min.js"></script> <script type="text/javascript" language="javascript" src="js/jquery-ui-1.7.1.custom.min.js"></script> <script type="text/javascript" language="javascript" src="js/jquery.colorbox.js"></script> <script type="text/javascript" language="javascript" src="js/global.js"></script> <script type="text/javascript" language="javascript" src="js/home.js"></script> <script type="text/javascript"> $(document).ready(function() { $(".signin").click(function(e) { e.preventDefault(); $("fieldset#signin_menu").toggle(); $(".signin").toggleClass("menu-open"); }); $("fieldset#signin_menu").mouseup(function() { return false }); $(document).mouseup(function(e) { if($(e.target).parent("a.signin").length==0) { $(".signin").removeClass("menu-open"); $("fieldset#signin_menu").hide(); } }); }); </script> <script src="javascripts/jquery.tipsy.js" type="text/javascript"></script> <script type='text/javascript'> $(function() { $('#forgot_username_link').tipsy({gravity: 'w'}); }); </script> <script type="text/javascript" language="javascript" src="js/easySlider1.5.js"></script> <script type="text/javascript"> $(document).ready(function(){ $("#slider").easySlider(); }); </script> <script type="text/javascript"> $(document).ready(function(){ $(".regbox").colorbox({iframe:true, innerWidth:270, innerHeight:270}); }); </script>

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >