Search Results

Search found 23127 results on 926 pages for 'based'.


  • Opinions on Copy Protection / Software Licensing via phoning home?

    - by Jakobud
    I'm developing some software that I'm eventually going to sell. I've been thinking about different copy-protection mechanisms, both custom and third-party. I know that no copy protection is 100% foolproof, but I need to at least try, so I'm looking for opinions on the approach I have in mind.

    One method I'm considering is having my software connect to a remote server on startup to verify the license based on the MAC address of the Ethernet port. I'm not sure whether the server would run a MySQL database to look up the license information, or what... Is there a simpler way? Maybe some type of encrypted file that gets read? I would make the software still work if it can't connect to the server; I don't want to lock someone out just because they don't have internet access at that moment. In case you're wondering, the software I'm developing is extremely internet/network dependent, so it's actually quite unlikely that a user wouldn't have internet access when using it. Actually, it's pretty useless without network access.

    Does anyone know what I would do about computers that have multiple MAC addresses? A lot of motherboards these days have two Ethernet ports, and most laptops have an Ethernet, a Wi-Fi, and a Bluetooth MAC address. I suppose I could just pick one port and run with it; I'm not sure it really matters.

    A smart and tricky user could determine the server the software connects to and add it to their hosts file so that it always tries to connect to localhost. How likely do you think this is? And do you think it's possible for the software to check whether this is being done? I guess parsing the hosts file could work: look for the server address in there and see whether it's being redirected to localhost.

    I've considered dongles, but I'm trying to avoid them because I know they're a pain to work with. Keeping them updated, and possibly requiring the customer to run their own license server, is a bit too much for me; I've experienced that as a customer and wouldn't want to put mine through it. I'm also trying to avoid the extra overhead cost of third-party dongles.

    I'm leaning toward connecting to a remote server to verify authentication, as opposed to just sending the user some sort of license file, because what happens when the user buys a new computer? I would have to send them a replacement license file that works with the new machine, but they would still be able to use the old computer as well; there's no way for me to "de-authorize" the old computer without asking them to run some program on it.

    One important note: the EULA would make it very clear that the software connects to a remote server to verify licensing and that no personal information is sent. I know I don't care much for software that does that kind of thing without me knowing.

    Anyway, I'm just looking for opinions from people who may have gone down this road. Remote-server-dependent software seems like one of the most effective copy-protection mechanisms, not just because it's difficult to circumvent, but also because it makes licenses pretty easy to manage on the developer's end.
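
    On the hosts-file attack: one mitigation (a sketch of my own, not something from the question) is to have the real server sign its replies with a shared secret, so a fake server answering at localhost cannot produce a valid response. A minimal C# illustration, assuming a secret baked into the client and a challenge-response exchange:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class LicenseCheck
        {
            // The client sends a random challenge; a genuine server returns
            // HMAC(secret, challenge). A spoofed localhost server cannot forge
            // this reply without the shared secret.
            public static bool IsReplyAuthentic(string challenge, string reply, byte[] secret)
            {
                using (HMACSHA256 hmac = new HMACSHA256(secret))
                {
                    byte[] expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(challenge));
                    return Convert.ToBase64String(expected) == reply;
                }
            }
        }

    A determined attacker can still extract the secret from the binary, which is in line with the "nothing is foolproof" caveat above.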

  • I never really understood: what is Application Binary Interface (ABI)?

    - by claws
    I never clearly understood what an ABI is. I'm sorry for such a lengthy question; I just want to understand things clearly. Please don't point me to the wiki article -- if I could understand it, I wouldn't be here posting such a lengthy post.

    This is my mental model of different interfaces: a TV remote is an interface between the user and the TV. It is an existing entity, but useless (it provides no functionality) by itself; all the functionality behind each of those buttons is implemented in the television set. Interface: an "existing entity" layer between the functionality and the consumer of that functionality. An interface by itself doesn't do anything; it just invokes the functionality lying behind it. Now, depending on who the consumer is, there are different types of interfaces:

    Command Line Interface (CLI): commands are the existing entities, the consumer is the user, and the functionality lies behind. functionality: my software's functionality, which serves the purpose we are describing this interface for; existing entities: commands; consumer: the user.

    Graphical User Interface (GUI): windows, buttons, etc. are the existing entities, again the consumer is the user, and the functionality lies behind. functionality: my software's functionality; existing entities: windows, buttons, etc.; consumer: the user.

    Application Programming Interface (API): functions -- or, more correctly, interfaces (as in interface-based programming) -- are the existing entities, the consumer here is another program rather than a user, and again the functionality lies behind this layer. functionality: my software's functionality; existing entities: functions, interfaces (collections of functions); consumer: another program/application.

    Application Binary Interface (ABI): here my problem starts. functionality: ??? existing entities: ??? consumer: ???

    I've written a few pieces of software in different languages and provided different kinds of interfaces (CLI, GUI, API), but I'm not sure I ever provided an ABI. http://en.wikipedia.org/wiki/Application_binary_interface says: ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system. Other ABIs standardize details such as C++ name mangling, exception propagation, and the calling convention between compilers on the same platform, but do not require cross-platform compatibility.

    Who needs these details? Please don't say "the OS". I know assembly programming. I know how linking and loading work. I know exactly what happens inside. Where does C++ name mangling come in? I thought we were talking at the binary level -- where do languages come in? Anyway, I downloaded the System V Application Binary Interface, Edition 4.1 (1997-03-18) [PDF] to see what exactly it contains. Well, most of it didn't make any sense. Why does it contain two chapters (the 4th and 5th) that describe the ELF file format? In fact, those are the only two significant chapters of that specification; the rest are all "processor specific". Anyway, I thought that was a completely different topic. Please don't say that the ELF file format specs are the ABI; that doesn't qualify as an interface according to the definition. I know that since we are talking at such a low level it must be very specific, but I'm not sure how it is "Instruction Set Architecture (ISA)" specific. Where can I find Microsoft Windows' ABI? So, these are the major queries that are bugging me.
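
    A small C++ illustration of one ABI concern named above (a sketch; the mangled name shown assumes the Itanium C++ ABI used by g++):

        // add.cpp -- what the *linker* sees depends on the ABI, not the API
        int add(int a, int b) { return a + b; }
        // emitted symbol under the Itanium C++ ABI: _Z3addii
        // (MSVC mangles the same function differently -- an ABI difference)

        extern "C" int add_c(int a, int b) { return a + b; }
        // emitted symbol: add_c -- the C ABI, stable across C++ compilers.
        // The ABI also fixes how a and b travel (registers vs. stack) and how
        // the return value comes back, so object files from two compilers can
        // only be linked together if their ABIs agree.

    Put in the question's own terms: for an ABI, the existing entities are calling conventions, symbol names, and binary layouts; the consumer is another binary (compiler output, linker, loader); and the functionality is the same machine code sitting behind it all.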

  • Same source, multiple targets with different resources (Visual Studio .Net 2008)

    - by Mike Bell
    A set of software products differ only by their resource strings, binary resources, and by the strings/graphics/product keys used by their Visual Studio Setup projects. What is the best way to create, organize, and maintain them? That is, all the products essentially consist of the same core functionality, customized by graphics, strings, and other resource data to form each product. Imagine you are creating a set of products like "Excel for Bankers", "Excel for Gardeners", "Excel for CEOs", etc. Each product has the same functionality but differs in name, graphics, help files, included templates, etc. The environment in which these are being built is vanilla Windows.Forms / Visual Studio 2008 / C# / .Net. The ideal solution would be easy to maintain: e.g., if I introduce a new string or new resource, projects I haven't added the resource to should fail at compile time, not run time. (And subsequent localization of the products should also be feasible.) Hopefully I've missed the blindingly obvious and easy way of doing all this. What is it?

    ============ Clarification(s) ================

    By "product" I mean the package of software that gets installed by the installer and sold to the end user. Currently I have one solution, consisting of multiple projects (including a Setup project), which builds a set of assemblies and creates a single installer. What I need to produce are multiple products/installers, all with similar functionality, which are built from the same set of assemblies but differ in the set of resources used by one of the assemblies. What's the best way of doing this?

    ------------ The 95% Solution -----------------

    Based upon Daminen_the_unbeliever's answer, a resource file per configuration can be achieved as follows (see the sketch after this list):

    1. Create a class library project ("Satellite").
    2. Delete the default .cs file and add a folder ("Default").
    3. Create a resource file in the folder, "MyResources". In Properties, set CustomToolNamespace to something appropriate (e.g. "XXX") and make sure the access modifier for the resources is "Public". Add the resources.
    4. Edit the source code to refer to the resources as XXX.MyResources.ResourceName.
    5. Create configurations for each product variant ("ConfigN").
    6. For each product variant, create a folder ("VariantN") and copy-and-paste the MyResources file into each VariantN folder.
    7. Unload the "Satellite" project and edit the .csproj file: for each "VariantN/MyResources" <Compile> or <EmbeddedResource> tag, add a Condition="'$(Configuration)' == 'ConfigN'" attribute.
    8. Save, reload the .csproj, and you're done.

    This creates a per-configuration resource file, which can (presumably) be further localized. Compile errors are produced for any configuration where a resource is missing. The resource files can be localized using the standard method (create a second resource file, MyResources.fr.resx, and edit the .csproj as before). The reason this is a 95% solution is that resources used to initialize forms (e.g. form titles, button texts) can't easily be handled in the same manner; the easiest approach seems to be to overwrite those with values from the satellite assembly.
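
    A sketch of what the conditioned .csproj entries from step 7 might look like (the folder and configuration names are the illustrative ones used above; the elements are standard MSBuild):

        <ItemGroup>
          <EmbeddedResource Include="Variant1\MyResources.resx"
                            Condition="'$(Configuration)' == 'Config1'" />
          <EmbeddedResource Include="Variant2\MyResources.resx"
                            Condition="'$(Configuration)' == 'Config2'" />
        </ItemGroup>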

  • PHP - How to pass multiple values in a form field

    - by Tall boY
    Hi, I have a PHP-based sorting method with a drop-down menu to choose the number of rows; it works fine. I have another set of sorting links to sort by id and title, and they also work fine. But together they don't: when I sort (say, by title) using the links, the result gets sorted by title; then, if I change the row count with the drop-down menu, the rows change but the result falls back to the default id sort. The sorting code for id and title is:

        if ($orderby == 'title' && $sortby == 'asc')
            {echo " <li id='scurrent'><a href='?rpp=$rowsperpage&order=title&sort=asc'>title-asc:</a></li> ";}
        else
            {echo " <li><a href='?rpp=$rowsperpage&order=title&sort=asc'>title-asc:</a></li> ";}
        if ($orderby == 'title' && $sortby == 'desc')
            {echo " <li id='scurrent'><a href='?rpp=$rowsperpage&order=title&sort=desc'>title-desc:</a></li> ";}
        else
            {echo " <li><a href='?rpp=$rowsperpage&order=title&sort=desc'>title-desc:</a></li> ";}
        if ($orderby == 'id' && $sortby == 'asc')
            {echo " <li id='scurrent'><a href='?rpp=$rowsperpage&order=id&sort=asc'>id-asc:</a></li> ";}
        else
            {echo " <li><a href='?rpp=$rowsperpage&order=id&sort=asc'>id-asc:</a></li> ";}
        if ($orderby == 'id' && $sortby == 'desc')
            {echo " <li id='scurrent'><a href='?rpp=$rowsperpage&order=id&sort=desc'>id-desc:</a></li> ";}
        else
            {echo " <li><a href='?rpp=$rowsperpage&order=id&sort=desc'>id-desc:</a></li> ";}
        ?>

    The sorting code for rows is:

        <form action="is-test.php" method="get">
          <select name="rpp" onchange="this.form.submit()">
            <option value="10" <?php if ($rowsperpage == 10) echo 'selected="selected"' ?>>10</option>
            <option value="20" <?php if ($rowsperpage == 20) echo 'selected="selected"' ?>>20</option>
            <option value="30" <?php if ($rowsperpage == 30) echo 'selected="selected"' ?>>30</option>
          </select>
        </form>

    This method passes only the rows per page (rpp) into the URL. I want it to pass order, sort, and rpp. Is there a way to pass multiple values in form fields like this?

        <form action="is-test.php" method="get">
          <select name="rpp, order, sort" onchange="this.form.submit()">
            <option value="10, $orderby, $sortby" <?php if ($rowsperpage == 10) echo 'selected="selected"' ?>>10</option>
            <option value="20, $orderby, $sortby" <?php if ($rowsperpage == 20) echo 'selected="selected"' ?>>20</option>
            <option value="30, $orderby, $sortby" <?php if ($rowsperpage == 30) echo 'selected="selected"' ?>>30</option>
          </select>
        </form>

    This may seem silly, but it's just to give you an idea of what I am trying to implement (I am very new to PHP). Please suggest any way to make this work. Thanks.
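
    One common way to do this (a sketch, assuming $orderby and $sortby hold the current values): a form submits every named field it contains, so the extra values can ride along as hidden inputs instead of being packed into a single name:

        <form action="is-test.php" method="get">
          <input type="hidden" name="order" value="<?php echo htmlspecialchars($orderby); ?>" />
          <input type="hidden" name="sort"  value="<?php echo htmlspecialchars($sortby);  ?>" />
          <select name="rpp" onchange="this.form.submit()">
            <option value="10" <?php if ($rowsperpage == 10) echo 'selected="selected"' ?>>10</option>
            <option value="20" <?php if ($rowsperpage == 20) echo 'selected="selected"' ?>>20</option>
            <option value="30" <?php if ($rowsperpage == 30) echo 'selected="selected"' ?>>30</option>
          </select>
        </form>

    The submitted URL then carries rpp, order, and sort together, so the sort links and the row-count menu stop clobbering each other.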

  • Mapping integers to types using C++ templates fails in a specific case

    - by Shailesh Kumar
    I am attempting to compile the following template-based code in VC++ 2005:

        #include <iostream>
        using namespace std;

        /*
         * T is a template which maps an integer to a specific type.
         * The mapping happens through partial template specialization.
         * Below, T<1> is mapped to char, T<2> to long and T<3> to float.
         */
        template <int x>
        struct T {
        public:
        };

        template<> struct T<1> { public: typedef char  xType; };
        template<> struct T<2> { public: typedef long  xType; };
        template<> struct T<3> { public: typedef float xType; };

        // We can easily access the specific xType for a specific T<N>
        typedef T<3>::xType x3Type;

        /*
         * Here we attempt to use T<N> inside another template class, T2<r>.
         */
        template<int r>
        struct T2 {
            // We can map T<r> to some other type T3
            typedef T<r> T3;
            // The following line fails
            typedef T3::xType xType;
        };

        int main() {
            T<1>::xType a1;
            cout << typeid(a1).name() << endl;
            T<2>::xType a2;
            cout << typeid(a2).name() << endl;
            T<3>::xType a3;
            cout << typeid(a3).name() << endl;
            return 0;
        }

    There is one particular line in the code which doesn't compile: typedef T3::xType xType; If I remove this line, compilation succeeds and the output is: char long float. If I retain the line, I get these errors:

        main.cpp(53) : warning C4346: 'T<x>::xType' : dependent name is not a type
                prefix with 'typename' to indicate a type
        main.cpp(54) : see reference to class template instantiation 'T2<r>' being compiled
        main.cpp(53) : error C2146: syntax error : missing ';' before identifier 'xType'
        main.cpp(53) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int

    I am not able to figure out how to make sure that T3::xType is treated as a type inside the T2 template. Any help is highly appreciated.
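
    The C4346 warning itself points at the fix: inside a template, a name that depends on a template parameter (here T3::xType, since T3 is T<r>) is assumed not to be a type unless you say so with typename. A minimal corrected version:

        template<int r>
        struct T2 {
            typedef T<r> T3;
            // 'typename' tells the compiler the dependent name T3::xType is a type
            typedef typename T3::xType xType;
        };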

  • Problems with Asynchronous UDP Sockets

    - by ihatenetworkcoding
    Hi, I'm struggling a bit with socket programming (something I'm not at all familiar with) and I can't find anything that helps on Google or MSDN (awful). Apologies for the length of this.

    Basically, I have an existing service which receives and responds to requests over UDP. I can't change this at all. I also have a client within my webapp which dispatches requests to that service and listens for responses. The existing client I've been given is a singleton which creates a socket and an array of response slots, and then creates a background thread with an infinitely looping method that makes "sock.Receive()" calls and pushes the received data into the slot array. All kinds of things about this seem wrong to me, and the infinite thread breaks my unit testing, so I'm trying to replace this service with one that does its send/receives asynchronously instead.

    Point 1: Is this the right approach? I want a non-blocking, scalable, thread-safe service.

    My first attempt is roughly like this, which sort of worked, but the data I got back was always shorter than expected (i.e. the buffer did not have the number of bytes requested) and seemed to throw exceptions when processed:

        private Socket MyPreConfiguredSocket;

        public object Query()
        {
            //build a request
            this.MyPreConfiguredSocket.SendTo(MYREQUEST, packet.Length, SocketFlags.Multicast, this._target);
            IAsyncResult h = this._sock.BeginReceiveFrom(response, 0, BUFFER_SIZE, SocketFlags.None,
                ref this._target, new AsyncCallback(ARecieve), this._sock);
            if (!h.AsyncWaitHandle.WaitOne(TIMEOUT))
            {
                throw new Exception("Timed out");
            }
            //process response data (always shortened)
        }

        private void ARecieve(IAsyncResult result)
        {
            int bytesreceived = (result as Socket).EndReceiveFrom(result, ref this._target);
        }

    My second attempt was based on more Google trawling and this recursive pattern I frequently saw, but this version always times out! It never gets to ARecieve:

        public object Query()
        {
            //build a request
            this.MyPreConfiguredSocket.SendTo(MYREQUEST, packet.Length, SocketFlags.Multicast, this._target);
            State s = new State(this.MyPreConfiguredSocket);
            this.MyPreConfiguredSocket.BeginReceiveFrom(s.Buffer, 0, BUFFER_SIZE, SocketFlags.None,
                ref this._target, new AsyncCallback(ARecieve), s);
            if (!s.Flag.WaitOne(10000))
            {
                throw new Exception("Timed out");   //always thrown
            }
            //process response data
        }

        private void ARecieve(IAsyncResult result)   //never gets here!
        {
            State s = (result as State);
            int bytesreceived = s.Sock.EndReceiveFrom(result, ref this._target);
            if (bytesreceived > 0)
            {
                s.Received += bytesreceived;
                this._sock.BeginReceiveFrom(s.Buffer, s.Received, BUFFER_SIZE, SocketFlags.None,
                    ref this._target, new AsyncCallback(ARecieve), s);
            }
            else
            {
                s.Flag.Set();
            }
        }

        private class State
        {
            public State(Socket sock)
            {
                this._sock = sock;
                this._buffer = new byte[BUFFER_SIZE];
                this._buffer.Initialize();
            }
            public Socket Sock;
            public byte[] Buffer;
            public ManualResetEvent Flag = new ManualResetEvent(false);
            public int Received = 0;
        }

    Point 2: So clearly I'm getting something quite wrong.

    Point 3: I'm not sure if I'm going about this right. How does the data coming from the remote service even get to the right listening thread? Do I need to create a socket per request? Out of my comfort zone here. Need help.
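
    A sketch of a simpler shape for this (names are illustrative, not from the question): UDP is datagram-based, so one completed receive hands back one whole datagram. The recursive re-BeginReceiveFrom pattern in the second attempt is a TCP-stream idiom, and with UDP the "zero bytes received" branch that would set the flag never fires. Something along these lines avoids that entirely:

        using System;
        using System.Net;
        using System.Net.Sockets;

        public static class UdpQuery
        {
            public static byte[] Query(byte[] request, IPEndPoint target, int timeoutMs)
            {
                using (UdpClient client = new UdpClient())
                {
                    client.Send(request, request.Length, target);

                    IAsyncResult ar = client.BeginReceive(null, null);
                    if (!ar.AsyncWaitHandle.WaitOne(timeoutMs, false))
                        throw new TimeoutException("No reply from service");

                    IPEndPoint remote = null;
                    return client.EndReceive(ar, ref remote);   // one complete datagram
                }
            }
        }

    Giving each request its own short-lived client also answers Point 3: the reply reaches the right caller because it arrives on the socket that caller is waiting on.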

  • Optimizing JS Array Search

    - by The.Anti.9
    I am working on a browser-based media player which is written almost entirely in HTML5 and JavaScript. The backend is written in PHP, but it has one function, which is to fill the playlist on the initial load. The rest is all JS. There is a search bar that refines the playlist, and I want it to refine as the person types, like most media players do. The only problem is that this is very slow and laggy, as there are about 1000 songs in the whole program and likely more as time goes on.

    The original playlist load is an AJAX call to a PHP page that returns the results as JSON. Each item has four attributes: artist, album, file, url. I then loop through each object and add it to an array called playlist. At the end of the loop a copy of playlist is created, backup. This is so that I can refine the playlist variable as people refine their search, but still repopulate it from backup without making another server request.

    The method refine() is called whenever the user types a key into the search box. It flushes playlist and searches each property (not including url) of each object in the backup array for a match against the string. If any property matches, it appends the information to the table that displays the playlist, and adds the object to playlist for access by the actual player. Code for the refine() method:

        function refine() {
            $('#loadinggif').show();
            $('#library').html("<table id='libtable'><tr><th>Artist</th><th>Album</th><th>File</th><th>&nbsp;</th></tr></table>");
            playlist = [];
            for (var j = 0; j < backup.length; j++) {
                var sfile = new String(backup[j].file);
                var salbum = new String(backup[j].album);
                var sartist = new String(backup[j].artist);
                if (sfile.toLowerCase().search($('#search').val().toLowerCase()) !== -1 ||
                    salbum.toLowerCase().search($('#search').val().toLowerCase()) !== -1 ||
                    sartist.toLowerCase().search($('#search').val().toLowerCase()) !== -1) {
                    playlist.push(backup[j]);
                    num = playlist.length - 1;
                    $("<tr></tr>").html("<td>" + num + "</td><td>" + sartist + "</td><td>" + salbum +
                        "</td><td>" + sfile + "</td><td><a href='#' onclick='setplay(" + num + ");'>Play</a></td>")
                        .appendTo('#libtable');
                }
            }
            $('#loadinggif').hide();
        }

    As I said before, this is very slow and laggy for the first couple of letters typed. I am looking for ways to refine this to make it much faster and smoother.
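
    A sketch of the usual fixes (assuming a haystack field precomputed once at load time, e.g. item.haystack = (artist + ' ' + album + ' ' + file).toLowerCase()): read and lower-case the query once per call instead of three times per song, use indexOf instead of the regex-based search(), build all the rows as one string so the DOM is touched once, and debounce the keystrokes:

        var searchTimer;
        $('#search').keyup(function () {
            clearTimeout(searchTimer);
            searchTimer = setTimeout(refine, 150);   // wait for a pause in typing
        });

        function refine() {
            var q = $('#search').val().toLowerCase();   // computed once per call
            var rows = [];
            playlist = [];
            for (var j = 0; j < backup.length; j++) {
                if (backup[j].haystack.indexOf(q) !== -1) {   // plain substring test, no regex
                    playlist.push(backup[j]);
                    var n = playlist.length - 1;
                    rows.push('<tr><td>' + n + '</td><td>' + backup[j].artist +
                              '</td><td>' + backup[j].album + '</td><td>' + backup[j].file +
                              "</td><td><a href='#' onclick='setplay(" + n + ");'>Play</a></td></tr>");
                }
            }
            // rebuild the empty table shell exactly as before, then:
            $('#libtable').append(rows.join(''));   // one DOM update instead of hundreds
        }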

  • How to pass one variable's value from one class to another

    - by Arunabha
    I have two packages. One is com.firstBooks.series.db.parser, which has a Java file XMLParser.java; the other is com.firstBooks.series79, which has a class called AppMain. Now I want to send the value of a variable called _xmlFileName from the AppMain class to the xmlFile variable in the XMLParser class. I am posting the code for both classes; kindly help me.

        package com.firstBooks.series.db.parser;

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Vector;

        import net.rim.device.api.xml.parsers.DocumentBuilder;
        import net.rim.device.api.xml.parsers.DocumentBuilderFactory;
        import net.rim.device.api.xml.parsers.ParserConfigurationException;

        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;
        import org.xml.sax.SAXException;

        import com.firstBooks.series.db.Question;

        public class XMLParser {
            private Document document;
            public static Vector questionList;
            public static String xmlFile;

            public XMLParser() {
                questionList = new Vector();
            }

            public void parseXMl() throws SAXException, IOException, ParserConfigurationException {
                // Build a document based on the XML file.
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                DocumentBuilder builder = factory.newDocumentBuilder();
                InputStream inputStream = getClass().getResourceAsStream(xmlFile);
                document = builder.parse(inputStream);
            }

            public void parseDocument() {
                Element element = document.getDocumentElement();
                NodeList nl = element.getElementsByTagName("question");
                if (nl != null && nl.getLength() > 0) {
                    for (int i = 0; i < nl.getLength(); i++) {
                        Element ele = (Element) nl.item(i);
                        Question question = getQuestions(ele);
                        questionList.addElement(question);
                    }
                }
            }

            private Question getQuestions(Element element) {
                String title = getTextValue(element, "title");
                String choice1 = getTextValue(element, "choice1");
                String choice2 = getTextValue(element, "choice2");
                String choice3 = getTextValue(element, "choice3");
                String choice4 = getTextValue(element, "choice4");
                String answer = getTextValue(element, "answer");
                String rationale = getTextValue(element, "rationale");
                Question Questions = new Question(title, choice1, choice2, choice3, choice4, answer, rationale);
                return Questions;
            }

            private String getTextValue(Element ele, String tagName) {
                String textVal = null;
                NodeList nl = ele.getElementsByTagName(tagName);
                if (nl != null && nl.getLength() > 0) {
                    Element el = (Element) nl.item(0);
                    textVal = el.getFirstChild().getNodeValue();
                }
                return textVal;
            }
        }

    Now the code for the AppMain class:

        //#preprocess
        package com.firstBooks.series79;

        import net.rim.device.api.ui.UiApplication;
        import com.firstBooks.series.ui.screens.HomeScreen;

        public class AppMain extends UiApplication {
            public static String _xmlFileName;
            public static boolean _Lite;
            public static int _totalNumofQuestions;

            public static void initialize() {
                //#ifndef FULL
                /*
                //#endif
                _xmlFileName = "/res/Series79_FULL.xml";
                _totalNumofQuestions = 50;
                _Lite = false;
                //#ifndef FULL
                */
                //#endif

                //#ifndef LITE
                /*
                //#endif
                _xmlFileName = "/res/Series79_LITE.xml";
                _totalNumofQuestions = 10;
                _Lite = true;
                //#ifndef LITE
                */
                //#endif
            }

            private AppMain() {
                initialize();
                pushScreen(new HomeScreen());
            }

            public static void main(String args[]) {
                new AppMain().enterEventDispatcher();
            }
        }
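
    Since both fields are public static, the quickest fix is a one-line assignment after initialize() runs; a cleaner sketch is to pass the name in through the constructor so XMLParser doesn't depend on AppMain's static at all:

        // Quick fix -- anywhere after AppMain.initialize() has run:
        XMLParser.xmlFile = AppMain._xmlFileName;

        // Cleaner sketch: make the parser take the file name explicitly.
        public class XMLParser {
            private final String xmlFile;

            public XMLParser(String xmlFile) {
                this.xmlFile = xmlFile;   // parseXMl() then reads this.xmlFile
            }
        }

        // ...constructed as:
        XMLParser parser = new XMLParser(AppMain._xmlFileName);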

  • Problem with table in PHP

    - by user225269
    I'm trying to create a table in PHP that shows data from the MySQL database based on the checkboxes the user has checked. As you can see in this screenshot, it has problems when a checkbox before the last checked one is left unchecked: http://www.mypicx.com/04282010/1/ Here is my code:

        if($_POST['general'] == 'ADDRESS'){
            $result2 = mysql_query("SELECT * FROM student WHERE ADDRESS='$saddress'");
        ?>
        <table border='1'>
        <tr>
        <th>IDNO</th>
        <th>YEAR</th>
        <th>SECTION</th>
        <?php if ( $ShowLastName ) echo "<th>LASTNAME</th>" ?>
        <?php if ( $ShowFirstName ) echo "<th>FIRSTNAME</th>" ?>
        <?php if ( $ShowMidName ) echo "<th>MIDNAME</th>" ?>
        <?php if ( $ShowAddress ) echo "<th>ADDRESS</th>" ?>
        <?php if ( $ShowGender ) echo "<th>GENDER</th>" ?>
        <?php if ( $ShowReligion ) echo "<th>RELIGION</th>" ?>
        <?php if ( $ShowBday ) echo "<th>BIRTHDAY</th>" ?>
        <?php if ( $ShowContact ) echo "<th>CONTACT</th>" ?>
        </tr>
        <?php while($row = mysql_fetch_array($result2)) { ?>
        <tr>
        <td><?php echo $row['IDNO'] ?></td>
        <td><?php echo $row['YEAR'] ?></td>
        <td><?php echo $row['SECTION'] ?></td>
        <td><?php if ( $ShowLastName ) echo $row['LASTNAME'] ?></td>
        <td><?php if ( $ShowFirstName ) echo $row['FIRSTNAME'] ?></td>
        <td><?php if ( $ShowMidName ) echo $row['MI'] ?></td>
        <td><?php if ( $ShowAddress ) echo $row['ADDRESS'] ?></td>
        <td><?php if ( $ShowGender ) echo $row['GENDER'] ?></td>
        <td><?php if ( $ShowReligion ) echo $row['RELIGION'] ?></td>
        <td><?php if ( $ShowBday ) echo $row['BIRTHDAY'] ?></td>
        <td><?php if ( $ShowContact ) echo $row['S_CONTACTNUM'] ?></td>
        </tr>
        <?PHP } ?>
        </table>
        <?PHP } mysql_close($con); ?>

    What can you recommend so that the output doesn't end up like this when a checkbox before another checked one is not clicked: http://www.mypicx.com/04282010/2/
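
    The header cells are emitted conditionally but the data cells always are, so the columns drift whenever a middle checkbox is unchecked. A sketch of the fix: put the whole <td> inside the same condition as its <th>, so a hidden column produces neither a header nor a cell:

        <td><?php echo $row['IDNO'] ?></td>
        <td><?php echo $row['YEAR'] ?></td>
        <td><?php echo $row['SECTION'] ?></td>
        <?php if ( $ShowLastName )  echo "<td>" . $row['LASTNAME']  . "</td>"; ?>
        <?php if ( $ShowFirstName ) echo "<td>" . $row['FIRSTNAME'] . "</td>"; ?>
        <?php if ( $ShowMidName )   echo "<td>" . $row['MI']        . "</td>"; ?>
        <?php // ...and the same pattern for the remaining optional columns ?>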

  • Is this a reasonable way to handle getters/setters in a PHP class?

    - by Mark Biek
    I'm going to try something with the format of this question, and I'm very open to suggestions about a better way to handle it. I didn't want to just dump a bunch of code in the question, so I've posted the code for the class on refactormycode: base-class-for-easy-class-property-handling. My thought was that people can either post code snippets here or make changes on refactormycode and post links back to their refactorings. I'll upvote and accept an answer (assuming there's a clear "winner") based on that.

    At any rate, on to the class itself: I see a lot of debate about getter/setter class methods -- is it better to just access simple property variables directly, or should every class have explicit get/set methods defined, blah blah blah. I like the idea of having explicit methods in case you have to add more logic later; then you don't have to modify any code that uses the class. However, I hate having a million functions that look like this:

        public function getFirstName()
        {
            return $this->firstName;
        }

        public function setFirstName($firstName)
        {
            $this->firstName = $firstName;
        }

    Now I'm sure I'm not the first person to do this (I'm hoping there's a better way of doing it that someone can suggest to me). Basically, the PropertyHandler class has a __call magic method. Any methods that come through __call that start with "get" or "set" are then routed to functions that set or retrieve values in an associative array. The key into the array is the name of the calling method after "get" or "set": if the method coming into __call is "getFirstName", the array key is "FirstName". I liked using __call because it automatically takes care of the case where the subclass already has a "getFirstName" method defined. My impression (and I may be wrong) is that the __get and __set magic methods don't do that.

    So here's an example of how it would work:

        class PropTest extends PropertyHandler
        {
            public function __construct()
            {
                parent::__construct();
            }
        }

        $props = new PropTest();
        $props->setFirstName("Mark");
        echo $props->getFirstName();

    Notice that PropTest doesn't actually have "setFirstName" or "getFirstName" methods, and neither does PropertyHandler. All that's happening is manipulation of array values.

    The other case would be where your subclass already extends something else. Since you can't have true multiple inheritance in PHP, you can give your subclass a PropertyHandler instance as a private variable. You have to add one more function, but then things behave in exactly the same way:

        class PropTest2
        {
            private $props;

            public function __construct()
            {
                $this->props = new PropertyHandler();
            }

            public function __call($method, $arguments)
            {
                return $this->props->__call($method, $arguments);
            }
        }

        $props2 = new PropTest2();
        $props2->setFirstName('Mark');
        echo $props2->getFirstName();

    Notice how the subclass has a __call method that just passes everything along to the PropertyHandler's __call method.

    Another good argument against handling getters and setters this way is that it makes things really hard to document. In fact, it's basically impossible to use any sort of documentation-generation tool, since the explicit methods to be documented don't exist. I've pretty much abandoned this approach for now. It was an interesting learning exercise, but I think it sacrifices too much clarity.
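
    On the documentation point: one partial remedy (a sketch; support varies by tool) is to declare the magic accessors with phpDoc @method tags on the class, which gives IDEs and doc generators something concrete to show even though __call() handles the calls at runtime:

        /**
         * @method string getFirstName()
         * @method void   setFirstName(string $firstName)
         */
        class PropTest extends PropertyHandler
        {
        }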

  • Weird GWT issue causing IE threads to skyrocket.

    - by WesleyJohnson
    I'm not sure if this is an issue with GWT, JavaScript, Java, IE, or just poor programming, but I'll try to explain.

    We're implementing a web-based chat program at work, and some of our users have unreliable connections. So we run into cases where they send out a new message and, after x milliseconds have passed, the XHR request times out and the client tries to resend the message. The issue we ran into was that sometimes the message would make it to the server and into the DB, but the XHR response wouldn't make it back to the client, so the client was essentially retrying requests that had already made it to the server. To mitigate this, we now send a count/key along with the message. The client says, "hey, I'm sending msg 50 and its text is this." If the server already has that message, it just sends back "OK, I got it" and doesn't insert it into the DB again, eliminating dupes. So the client is free to keep retrying over and over until a reply finally comes back from the server saying "OK, I got it," and then it increments the key and moves on (or we keep them out of the chat if it fails enough).

    Anyway, that's the background of what we're doing. The issue is: when we add this code, on some versions of IE the thread count starts increasing gradually every time it's used. On IE8 for Windows 7 x64 it doesn't really seem to happen, but on IE8 for Windows Vista x86 it does, so I can't pinpoint whether it's a fluke or my code. Maybe someone has ideas on a better way to do this. Here is some pseudo-code (the issue seems to appear where I increment messageCount -- is this a scope thing, a naming conflict, or is the issue entirely somewhere else and I'm way off base?):

        public class SFChatClient implements EntryPoint {
            private List<String> messageQueue;
            private Integer messageCount = 0;

            public void onModuleLoad() {
                messageQueue = new ArrayList<String>();
                // set up UI and whatnot
                // add a KeyHandler to an input box that checks for <ENTER> and calls sendMessage()
            }

            private void sendMessage() {
                // add message content to the UI for the chat
                messageQueue.add( /* get message from user */ );
                sendQueuedMessages();
            }

            private void sendQueuedMessages() {
                if (messageQueue.size() > 0) {
                    String outgoingMessage = messageQueue.get(0);
                    WebServiceClass.sendMessage(outgoingMessage, messageCount, new WebServiceHandler() {
                        public void onSuccess() {
                            // delete item 0 from messageQueue
                            messageCount = messageCount + 1;  // <-- this seems to cause IE to leak threads.
                                                              //     Taking out this line stops the issue???
                            sendQueuedMessages();
                        }

                        public void onError() {
                            // do error handling
                            sendQueuedMessages();
                        }
                    });
                }
            }
        }

        public class WebServiceClass {
            public void sendMessage(String message, Integer messageCount, WebServiceHandler handler) {
                RequestBuilder builder = new RequestBuilder( /* proper params for the web service URL, JSON content type, etc. */ ) {
                    public void onSuccess() {
                        handler.onSuccess();
                    }

                    public void onError() {
                        handler.onError();
                    }
                };
                builder.setData( /* JSON with message */ );
                builder.send();
            }
        }
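
    One hedged suggestion rather than a confirmed diagnosis: onError() here retries immediately and recursively, so a flaky connection turns into a tight loop of new XHRs, which is exactly the kind of load that can behave differently across IE builds. Scheduling the retry with GWT's Timer at least puts a floor under the retry rate:

        import com.google.gwt.user.client.Timer;

        public void onError() {
            // do error handling, then retry after a pause instead of immediately,
            // so a dead connection doesn't become a tight XHR resend loop
            new Timer() {
                @Override
                public void run() {
                    sendQueuedMessages();
                }
            }.schedule(500);   // illustrative delay; consider growing it per failure
        }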

  • Opening a file with a variable as its name and checking for undefined values

    - by Harm De Weirdt
    I'm having some problems writing data into a file using Perl.

        sub startNewOrder {
            my $name = makeUniqueFileName();
            open (ORDER, ">$name.txt") or die "can't open file: $!\n";

            format ORDER_TOP =
        PRODUCT<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<CODE<<<<<<<<AANTAL<<<<EENHEIDSPRIJS<<<<<<TOTAAL<<<<<<<
        .
            format ORDER =
        @<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< @<<<<<<<< @<<<< @<<<<<< @<<<<<
        $title, $code, $amount, $price, $total
        .
            close (ORDER);
        }

    This is the sub I use to make the file. (I translated most of it.) The makeUniqueFileName method builds a file name from the current time ("minuteshoursdayOrder"). The problem is that I then have to write to this file in another sub:

        sub addToOrder {
            print "give productcode:";
            $code = <STDIN>;
            chop $code;
            print "Give amount:";
            $amount = <STDIN>;
            chop $amount;

            if ($inventory{$code} eq undef) {   # Does the product exist?
                print "This product does not exist";
            } elsif ($inventory{$code}[2] < $amount && !defined($inventaris{$code}[2])) {   # Is there enough in the inventory?
                print "There is not enough in stock";
            } else {
                $inventory{$code}[2] -= $amount;
                # write to the order file
                open (ORDER ">>$naam.txt") or die "can't open file: $!\n";
                $title = $inventory{$code}[0];
                $code = $code;
                $amount = $inventory{$code}[2];
                $price = $inventory{$code}[1];
                $total = $inventory{$code}[1];
                write;
                close(ORDER);
            }
        }

    %inventory is a hash table keyed by product code, with an array of title, price, and amount as the value. There are two problems here. First, when I enter an invalid product number, I still have to enter an amount, even though it seems like my code should print the error right after checking whether a product with the given code exists. Second, the writing doesn't seem to work: it always gives a "No such file or directory" error. Is there a way to open the ORDER file I made in the first sub without making $name non-local? Or just any way to write to this file? I really don't know how to start here; I can't find much info on writing to a file that has been closed before, in a different sub. Any help is appreciated, Harm
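
    A sketch of one way out of both problems: the amount prompt runs before any check, so validate the code as soon as it is read (exists is the idiomatic test, not eq undef); and have startNewOrder return the file name so addToOrder can take it as an argument instead of relying on a lexical $name that is out of scope (the failing open is also missing a comma after ORDER):

        sub startNewOrder {
            my $name = makeUniqueFileName();
            open(ORDER, ">", "$name.txt") or die "can't open file: $!\n";
            # format declarations as before
            close(ORDER);
            return $name;                      # hand the name back to the caller
        }

        sub addToOrder {
            my ($name) = @_;                   # receive the file name here
            print "give productcode:";
            chomp(my $code = <STDIN>);
            unless (exists $inventory{$code}) {    # validate before asking for an amount
                print "This product does not exist\n";
                return;
            }
            print "Give amount:";
            chomp(my $amount = <STDIN>);
            # stock check and field assignments as before, then:
            open(ORDER, ">>", "$name.txt") or die "can't open file: $!\n";   # note the commas
            write;
            close(ORDER);
        }

        # caller:
        my $order = startNewOrder();
        addToOrder($order);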

  • Must I expose the aggregate children as public properties to implement persistence ignorance?

    - by xuehua
    Hi all, I'm very glad that I found this website recently; I've learned a lot from here. I'm from China, and my English is not so good, but I will try to express what I want to say.

    Recently I've started learning about Domain-Driven Design, and I'm very interested in it, so I plan to develop a forum website using DDD. After reading lots of threads here, I understand that persistence ignorance is good practice. Currently I have two questions about things I've been thinking over for a long time:

    1. Should the domain object interact with the repository to get/save data?
    2. If the domain object doesn't use a repository, then how does the infrastructure layer (like the unit of work) know which domain objects are new/modified/removed?

    For the second question, here's some example code. Suppose I have a User class:

        public class User
        {
            public Guid Id { get; set; }
            public string UserName { get; set; }
            public string NickName { get; set; }

            /// <summary>
            /// A Roles collection which represents the current user's roles.
            /// Here I don't want to expose it as a public property;
            /// instead I use the methods below.
            /// </summary>
            //public IList<Role> Roles { get; set; }
            private List<Role> roles = new List<Role>();

            public IList<Role> GetRoles()
            {
                return roles;
            }

            public void AddRole(Role role)
            {
                roles.Add(role);
            }

            public void RemoveRole(Role role)
            {
                roles.Remove(role);
            }
        }

    Based on the above User class, suppose I get a user from the IUserRepository and add a role to it:

        IUserRepository userRepository;
        User user = userRepository.Get(Guid.NewGuid());
        user.AddRole(new Role() { Name = "Administrator" });

    In this case, I don't know how the repository or unit of work can tell that the user has a new role. I think a truly persistence-ignorant ORM framework should support POCOs and should automatically know about any changes that occur on the POCO itself, even when the object's state is changed through methods (AddRole, RemoveRole) as in the example above. I know a lot of ORMs can automatically persist the changes if I use a public Roles property, but sometimes I don't like that approach for performance reasons. Could anyone give me some ideas about this? Thanks. This is my first question on this site; I hope my English can be understood. Any answers will be much appreciated.
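
    One common answer (a sketch with illustrative names, not any specific ORM's API): the unit of work snapshots each aggregate as it is loaded and diffs the snapshot against the live object at commit time, so the domain object never has to report its own changes and methods like AddRole stay persistence-ignorant:

        using System;
        using System.Collections.Generic;

        public class UserUnitOfWork
        {
            // snapshot of each user's roles, taken when the aggregate is loaded
            private readonly Dictionary<Guid, List<Role>> _snapshots =
                new Dictionary<Guid, List<Role>>();

            public User Get(Guid id)
            {
                User user = LoadFromStore(id);                     // illustrative stub
                _snapshots[id] = new List<Role>(user.GetRoles());  // copy, not a reference
                return user;
            }

            public void Commit(User user)
            {
                List<Role> before = _snapshots[user.Id];
                IList<Role> now = user.GetRoles();
                // roles present in 'now' but not in 'before' are inserts;
                // roles present in 'before' but not in 'now' are deletes
            }

            private User LoadFromStore(Guid id) { /* fetch and hydrate */ return null; }
        }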

  • jQuery .find() doesn't return data in IE but does in Firefox and Chrome

    - by Steve Hiner
    I helped a friend out by doing a little web work for him. Part of what he needed was an easy way to change a couple of pieces of text on his site. Rather than having him edit the HTML, I decided to provide an XML file with the messages in it, and I used jQuery to pull them out of the file and insert them into the page. It works great... in Firefox and Chrome. Not so great in IE7. I was hoping one of you could tell me why; I did a fair bit of googling but couldn't find what I'm looking for. Here's the XML:

        <?xml version="1.0" encoding="utf-8" ?>
        <messages>
          <message type="HeaderMessage">
            This message is put up in the header area.
          </message>
          <message type="FooterMessage">
            This message is put in the lower left cell.
          </message>
        </messages>

    And here's my jQuery call:

        <script type="text/javascript">
            $(document).ready(function() {
                $.get('messages.xml', function(d) {
                    //I have confirmed that it gets to here in IE
                    //and it has the xml loaded.
                    //alert(d); gives me a message box with the xml text in it
                    //alert($(d).find('message')); gives me "[object Object]"
                    //alert($(d).find('message')[0]); gives me "undefined"
                    //alert($(d).find('message').Length); gives me "undefined"
                    $(d).find('message').each(function() {
                        //But it never gets to here in IE
                        var $msg = $(this);
                        var type = $msg.attr("type");
                        var message = $msg.text();
                        switch (type) {
                            case "HeaderMessage":
                                $("#HeaderMessageDiv").html(message);
                                break;
                            case "FooterMessage":
                                $("#footermessagecell").html(message);
                                break;
                            default:
                        }
                    });
                });
            });
        </script>

    Is there something I need to do differently in IE? Based on the message box with [object Object], I assumed .find was working in IE, but since I can't index into the array with [0] or check its length, I'm guessing .find isn't returning any results. Any reason why that would work perfectly in Firefox and Chrome but fail in IE? I'm a total newbie with jQuery, so I hope I haven't done something stupid. The code above was scraped out of a forum and modified to suit my needs; since jQuery is cross-platform, I figured I wouldn't have to deal with this mess.

    Edit: I've found that if I load the page in Visual Studio 2008 and run it, it works in IE. So it turns out it always works when run through the development web server. Now I'm thinking IE just doesn't like doing .find on XML loaded off my local drive, so maybe when this is on an actual web server it will work OK. I have confirmed that it works fine when browsed from a web server. Must be a peculiarity with IE; I'm guessing the web server sets the MIME type for the XML transfer, and without that IE doesn't parse the XML correctly.
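
    That MIME-type hunch can be tested directly (a sketch): telling jQuery the response is XML, rather than letting it infer the type from the Content-Type header, makes IE parse it the same way everywhere:

        $.ajax({
            url: 'messages.xml',
            dataType: 'xml',   // force XML parsing instead of trusting the MIME type
            success: function (d) {
                $(d).find('message').each(function () {
                    // ...same handling as before
                });
            }
        });

    (Also note that the jQuery collection's size property is lowercase .length; .Length being "undefined" is expected even when .find() matched.)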

  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data.

    The Set-up: Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully.

    The Challenge: I'm not sure of the best way to track progress in the individual categories toward each level. The "business" rules are: you have to achieve a certain number of points in each category to move up; if you get the number of points needed in Cat 8 but still have other work to do to complete the level, any new Cat 8 points count toward your overall score but don't "roll over" into the next level. The number of categories is small (five currently) and unlikely to change often, but by no means absolutely fixed. The number of points needed to level up will vary per level, probably by a formula, or perhaps a lookup table. So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches:

    Possible Solutions:

    1. Add a column to the Users table for each category and reset them all to zero each time a user levels up.
    2. Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a many-to-many version of #1.)
    3. Add a userLevel column to the UserTasks table and use that to derive progress with some kind of SUM statement; the user's current level is a simple int in the Users table.

    Pros & Cons: (1) seems by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category IDs to help overcome some of that (with code like "select cats; for each cat, get the value from Users.progress_{cat.id}"). It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care. (2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table; I foresee synchronization challenges. (3) is somewhere in between -- cleaner than #2, but less intuitive than #1. To find out where a user is, I'd have mildly complex SQL like:

        SELECT categoryId, SUM(points)
        FROM UserTasks
        WHERE userId = {user.id}
          AND countsTowardLevel = {user.level}
        GROUP BY categoryId

    Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice, or other ideas.
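
    A sketch of what option 3 adds to the schema (names are illustrative and the DDL is generic SQL): each completed task row is stamped with the level the user was on, so per-level, per-category progress is derivable later and no history is lost:

        ALTER TABLE UserTasks ADD COLUMN userLevel INT NOT NULL DEFAULT 1;

        -- progress toward the *current* level only, one row per category:
        SELECT categoryId, SUM(points) AS pointsThisLevel
        FROM UserTasks
        WHERE userId = 3          -- the user in question
          AND userLevel = 2       -- their current level, read from Users
        GROUP BY categoryId;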

  • Qt: TableWidget's itemAt() acting weirdly

    - by emredog
    Hi, I'm working on a Windows application where, in a dialog, I query some data from Postgres and manually show the output in a table widget:

        m_ui->tableWidget->setRowCount(joinedData.count());
        for(int i = 0; i < joinedData.count(); i++) //for each row
        {
            m_ui->tableWidget->setItem(i, 0, new QTableWidgetItem(joinedData[i].bobin.referenceNumber));
            m_ui->tableWidget->setItem(i, 1, new QTableWidgetItem(QString::number(joinedData[i].bobin.width)));
            m_ui->tableWidget->setItem(i, 2, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getHole())));
            m_ui->tableWidget->setItem(i, 3, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getLessThanZeroFive())));
            m_ui->tableWidget->setItem(i, 4, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getZeroFive_to_zeroSeven())));
            m_ui->tableWidget->setItem(i, 5, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getZeroFive_to_zeroSeven_repetitive())));
            m_ui->tableWidget->setItem(i, 6, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getZeroSeven_to_Three())));
            m_ui->tableWidget->setItem(i, 7, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getThree_to_five())));
            m_ui->tableWidget->setItem(i, 8, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getMoreThanFive())));
        }

    Also, based on row and column information, I paint some of these table widget items in certain colors, but I don't think that's relevant. I reimplemented the QDialog's contextMenuEvent to obtain the right-clicked tableWidgetItem's row and column coordinates:

        void BobinFlanView::contextMenuEvent(QContextMenuEvent *event)
        {
            QMenu menu(m_ui->tableWidget);

            //standard actions
            menu.addAction(this->markInactiveAction);
            menu.addAction(this->markActiveAction);
            menu.addSeparator();
            menu.addAction(this->exportAction);
            menu.addAction(this->exportAllAction);

            //obtain the right-clicked item
            QTableWidgetItem* clickedItem = m_ui->tableWidget->itemAt(m_ui->tableWidget->mapFromGlobal(event->globalPos()));

            // if it's a colored one, add some more actions
            if (clickedItem && clickedItem->column() > 1 && clickedItem->row() > 0)
            {
                //this is a property, I'm keeping it for later use
                this->lastRightClickedItem = clickedItem;

                //debug purpose:
                QMessageBox::information(this, "", QString("clickedItem = %1, %2").arg(clickedItem->row()).arg(clickedItem->column()));
                QMessageBox::information(this, "", QString("globalClick = %1, %2\ntransformedPos = %3, %4")
                    .arg(event->globalX()).arg(event->globalY())
                    .arg(m_ui->tableWidget->mapFromGlobal(event->globalPos()).x())
                    .arg(m_ui->tableWidget->mapFromGlobal(event->globalPos()).y()));

                menu.addSeparator();
                menu.addAction(this->changeSelectedToleranceToUygun);
                menu.addAction(this->changeSelectedToleranceToUyar);
                menu.addAction(this->changeSelectedToleranceToDurdurUyar);

                //... some other irrelevant 'enable/disable' activities
                menu.exec(event->globalPos());
            }
        }

    The problem is, when I right-click on the same item I get the same global coordinates but randomly different row-column information. For instance, the global pos is exactly 600,230 but the row-column pair is randomly (5,3) or (4,3). I mean, what?! Also, when I click an item in the last two rows (later than 13, I guess), it never gets into the condition "if (clickedItem && clickedItem->column() > 1 && clickedItem->row() > 0)"; I think that's mainly because clickedItem is null. I'll be more than glad to share any more information, or even the full cpp/h/ui trio, in order to get help. Thanks a lot.
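
    A likely culprit worth ruling out (hedged, but it matches both symptoms): itemAt() expects coordinates relative to the table's viewport, not to the QTableWidget itself. Mapping to the widget includes the header heights and frame, so hit tests land roughly one row/column off, and clicks near the bottom fall past the last item and return null. The sketch:

        // map the global click into *viewport* coordinates before hit-testing
        QTableWidgetItem* clickedItem = m_ui->tableWidget->itemAt(
            m_ui->tableWidget->viewport()->mapFromGlobal(event->globalPos()));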

  • What is New in ASP.NET 4.0 Code Access Security

    - by Xiaohong
    ASP.NET Code Access Security (CAS) is a feature that helps protect server applications on hosting multiple Web sites, ASP.NET lets you assign a configurable trust level that corresponds to a predefined set of permissions. ASP.NET has predefined ASP.NET Trust Levels and Policy Files that you can assign to applications, you also can assign custom trust level and policy files. Most web hosting companies run ASP.NET applications in Medium Trust to prevent that one website affect or harm another site etc. As .NET Framework's Code Access Security model has evolved, ASP.NET 4.0 Code Access Security also has introduced several changes and improvements. The main change in ASP.NET 4.0 CAS In ASP.NET v4.0 partial trust applications, application domain can have a default partial trust permission set as opposed to being full-trust, the permission set name is defined in the <trust /> new attribute permissionSetName that is used to initialize the application domain . By default, the PermissionSetName attribute value is "ASP.Net" which is the name of the permission set you can find in all predefined partial trust configuration files. <trust level="Something" permissionSetName="ASP.Net" /> This is ASP.NET 4.0 new CAS model. For compatibility ASP.NET 4.0 also support legacy CAS model where application domain still has full trust permission set. You can specify new legacyCasModel attribute on the <trust /> element to indicate whether the legacy CAS model is enabled. By default legacyCasModel is false which means that new 4.0 CAS model is the default. <trust level="Something" legacyCasModel="true|false" /> In .Net FX 4.0 Config directory, there are two set of predefined partial trust config files for each new CAS model and legacy CAS model, trust config files with name legacy.XYZ.config are for legacy CAS model: New CAS model: Legacy CAS model: web_hightrust.config legacy.web_hightrust.config web_mediumtrust.config legacy.web_mediumtrust.config web_lowtrust.config legacy.web_lowtrust.config web_minimaltrust.config legacy.web_minimaltrust.config   The figure below shows in ASP.NET 4.0 new CAS model what permission set to grant to code for partial trust application using predefined partial trust levels and policy files:    There also some benefits that comes with the new CAS model: You can lock down a machine by making all managed code no-execute by default (e.g. setting the MyComputer zone to have no managed execution code permissions), it should still be possible to configure ASP.NET web applications to run as either full-trust or partial trust. UNC share doesn’t require full trust with CASPOL at machine-level CAS policy. Side effect that comes with the new CAS model: processRequestInApplicationTrust attribute is deprecated  in new CAS model since application domain always has partial trust permission set in new CAS model.   In ASP.NET 4.0 legacy CAS model or ASP.NET 2.0 CAS model, even though you assign partial trust level to a application but the application domain still has full trust permission set. The figure below shows in ASP.NET 4.0 legacy CAS model (or ASP.NET 2.0 CAS model) what permission set to grant to code for partial trust application using predefined partial trust levels and policy files:     What $AppDirUrl$, $CodeGen$, $Gac$ represents: $AppDirUrl$ The application's virtual root directory. This allows permissions to be applied to code that is located in the application's bin directory. 
For example, if a virtual directory is mapped to C:\YourWebApp, then $AppDirUrl$ would equate to C:\YourWebApp. $CodeGen$ The directory that contains dynamically generated assemblies (for example, the result of .aspx page compiles). This can be configured on a per application basis and defaults to %windir%\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files. $CodeGen$ allows permissions to be applied to dynamically generated assemblies. $Gac$ Any assembly that is installed in the computer's global assembly cache (GAC). This allows permissions to be granted to strong named assemblies loaded from the GAC by the Web application.   The new customization of CAS Policy in ASP.NET 4.0 new CAS model 1. Define which named permission set in partial trust configuration files By default the permission set that will be assigned at application domain initialization time is the named "ASP.Net" permission set found in all predefined partial trust configuration files. However ASP.NET 4.0 allows you set PermissionSetName attribute to define which named permission set in a partial trust configuration file should be the one used to initialize an application domain. Example: add "ASP.Net_2" named permission set in partial trust configuration file: <PermissionSet class="NamedPermissionSet" version="1" Name="ASP.Net_2"> <IPermission class="FileIOPermission" version="1" Read="$AppDir$" PathDiscovery="$AppDir$" /> <IPermission class="ReflectionPermission" version="1" Flags ="RestrictedMemberAccess" /> <IPermission class="SecurityPermission " version="1" Flags ="Execution, ControlThread, ControlPrincipal, RemotingConfiguration" /></PermissionSet> Then you can use "ASP.Net_2" named permission set for the application domain permission set: <trust level="Something" legacyCasModel="false" permissionSetName="ASP.Net_2" /> 2. Define a custom set of Full Trust Assemblies for an application By using the new fullTrustAssemblies element to configure a set of Full Trust Assemblies for an application, you can modify set of partial trust assemblies to full trust at the machine, site or application level. The configuration definition is shown below: <fullTrustAssemblies> <add assemblyName="MyAssembly" version="1.1.2.3" publicKey="hex_char_representation_of_key_blob" /></fullTrustAssemblies> 3. Define <CodeGroup /> policy in partial trust configuration files ASP.NET 4.0 new CAS model will retain the ability for developers to optionally define <CodeGroup />with membership conditions and assigned permission sets. The specific restriction in ASP.NET 4.0 new CAS model though will be that the results of evaluating custom policies can only result in one of two outcomes: either an assembly is granted full trust, or an assembly is granted the partial trust permission set currently associated with the running application domain. It will not be possible to use custom policies to create additional custom partial trust permission sets. When parsing the partial trust configuration file: Any assemblies that match to code groups associated with "PermissionSet='FullTrust'" will run at full trust. Any assemblies that match to code groups associated with "PermissionSet='Nothing'" will result in a PolicyError being thrown from the CLR. This is acceptable since it provides administrators with a way to do a blanket-deny of managed code followed by selectively defining policy in a <CodeGroup /> that re-adds assemblies that would be allowed to run. 
Any assemblies that match to code groups associated with other permissions sets will be interpreted to mean the assembly should run at the permission set of the appdomain. This means that even though syntactically a developer could define additional "flavors" of partial trust in an ASP.NET partial trust configuration file, those "flavors" will always be ignored. Example: defines full trust in <CodeGroup /> for my strong named assemblies in partial trust config files: <CodeGroup class="FirstMatchCodeGroup" version="1" PermissionSetName="Nothing"> <IMembershipCondition    class="AllMembershipCondition"    version="1" /> <CodeGroup    class="UnionCodeGroup"    version="1"    PermissionSetName="FullTrust"    Name="My_Strong_Name"    Description="This code group grants code signed full trust. "> <IMembershipCondition      class="StrongNameMembershipCondition" version="1"       PublicKeyBlob="hex_char_representation_of_key_blob" /> </CodeGroup> <CodeGroup   class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">   <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$AppDirUrl$/*" /> </CodeGroup> <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">   <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$CodeGen$/*"   /> </CodeGroup></CodeGroup>   4. Customize CAS policy at runtime in ASP.NET 4.0 new CAS model ASP.NET 4.0 new CAS model allows to customize CAS policy at runtime by using custom HostSecurityPolicyResolver that overrides the ASP.NET code access security policy. Example: use custom host security policy resolver to resolve partial trust web application bin folder MyTrustedAssembly.dll to full trust at runtime: You can create a custom host security policy resolver and compile it to assembly MyCustomResolver.dll with strong name enabled and deploy in GAC: public class MyCustomResolver : HostSecurityPolicyResolver{ public override HostSecurityPolicyResults ResolvePolicy(Evidence evidence) { IEnumerator hostEvidence = evidence.GetHostEnumerator(); while (hostEvidence.MoveNext()) { object hostEvidenceObject = hostEvidence.Current; if (hostEvidenceObject is System.Security.Policy.Url) { string assemblyName = hostEvidenceObject.ToString(); if (assemblyName.Contains(“MyTrustedAssembly.dll”) return HostSecurityPolicyResult.FullTrust; } } //default fall-through return HostSecurityPolicyResult.DefaultPolicy; }} Because ASP.NET accesses the custom HostSecurityPolicyResolver during application domain initialization, and a custom policy resolver requires full trust, you also can add a custom policy resolver in <fullTrustAssemblies /> , or deploy in the GAC. You also need configure a custom HostSecurityPolicyResolver instance by adding the HostSecurityPolicyResolverType attribute in the <trust /> element: <trust level="Something" legacyCasModel="false" hostSecurityPolicyResolverType="MyCustomResolver, MyCustomResolver" permissionSetName="ASP.Net" />   Note: If an assembly policy define in <CodeGroup/> and also in hostSecurityPolicyResolverType, hostSecurityPolicyResolverType will win. If an assembly added in <fullTrustAssemblies/> then the assembly has full trust no matter what policy in <CodeGroup/> or in hostSecurityPolicyResolverType.   
Other changes in ASP.NET 4.0 CAS

Use of the new transparency model introduced in .NET Framework 4.0: There is a change in the dynamically compiled assemblies generated by ASP.NET. In the new CAS model they are marked as security transparent level 2, following the .NET Framework 4.0 security transparency rule, which means partial trust code is treated as completely transparent and enforcement is stricter. In the legacy CAS model they are marked as security transparent level 1, following the .NET Framework 2.0 security transparency rule, for compatibility. Most of the ASP.NET runtime assemblies are also now marked as security transparent level 2, which makes their code SecurityTransparent by default unless the SecurityCritical or SecuritySafeCritical attribute is specified. See Security Changes in the .NET Framework 4 for more information about these security attributes.

Support for conditional APTCA: If an assembly is marked with the conditional APTCA attribute to allow partially trusted callers, and if you want to make the assembly both visible and accessible to partial-trust code in your web application, you must add a reference to the assembly in the partialTrustVisibleAssemblies section:

<partialTrustVisibleAssemblies>
  <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />
</partialTrustVisibleAssemblies>

Most of the ASP.NET runtime assemblies are also marked as conditional APTCA to prevent use of ASP.NET APIs in partial trust environments such as WinForms or WPF UI controls hosted in Internet Explorer.

Differences between the ASP.NET new CAS model and the legacy CAS model

Here are some differences between the two models.

ASP.NET 4.0 legacy CAS model:
ASP.NET partial trust appdomains have full trust permission.
Multiple different permission sets in a single appdomain are allowed in ASP.NET partial trust configuration files.
Code groups are supported.
Machine CAS policy is honored.
The processRequestInApplicationTrust attribute is still honored.

New configuration setting for the legacy model:

<trust level="Something" legacyCasModel="true"></trust>
<partialTrustVisibleAssemblies>
  <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />
</partialTrustVisibleAssemblies>

ASP.NET 4.0 new CAS model:
ASP.NET now runs in homogeneous application domains. Only full trust or the app domain's partial trust grant set are allowable permission sets. It is no longer possible to define arbitrary permission sets that get assigned to different assemblies. If an application currently depends on fine-tuning the partial trust permission set using the ASP.NET partial trust configuration file, this will no longer be possible.
The processRequestInApplicationTrust attribute is deprecated.
Dynamically compiled assemblies output by ASP.NET build providers are explicitly marked as transparent.
ASP.NET partial trust grant sets are independent from any enterprise, machine, or user CAS policy levels.
A simplified model for locking down web servers only allows trusted managed web applications to run. Machine policy that used to always grant full trust to managed code (based on membership conditions) can instead be configured using the new ASP.NET 4.0 full-trust assembly configuration section. The full-trust assembly configuration section requires explicitly listing each assembly, as opposed to using membership conditions.
Alternatively, the membership condition(s) used in machine policy can instead be re-defined in a <CodeGroup /> within ASP.NET's partial trust configuration file to grant full trust.

New configuration setting for the new model:

<trust level="Something" legacyCasModel="false" permissionSetName="ASP.Net" hostSecurityPolicyResolverType=".NET type string"></trust>
<fullTrustAssemblies>
  <add assemblyName="MyAssembly" version="1.0.0.0" publicKey="hex_char_representation_of_key_blob" />
</fullTrustAssemblies>
<partialTrustVisibleAssemblies>
  <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />
</partialTrustVisibleAssemblies>

Hope this post is helpful for better understanding ASP.NET 4.0 CAS.

Xiaohong Tang
ASP.NET QA Team

    Read the article

  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
Recently, I was involved in a customer scenario where they were modifying the Content Server's contributor data files directly through Content Server.  This operation of course is completely supported.  However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately.  This was due to two reasons.  The first reason is that the Spaces page was using Content Presenter to display the contents of the data file.  The second reason is that the Spaces application was using the "cached" version of the data file.  Fortunately, there is a way to configure the cache so backdoor changes can be picked up more quickly and automatically.

First, a brief overview of Content Presenter.  The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application.  With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or query for content, and then select a Content Presenter based template to render the content on a page in a Spaces application.  In addition to displaying the folders and the files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server documents) in either a Site Studio region template or in a custom Content Presenter display template.  More information about creating Content Presenter display templates can be found in the OFM Developers Guide for WebCenter Portal.

The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection setting through Enterprise Manager.  From here, under the Cache Details, there is a section to set the Cache Invalidation Interval.  Basically, this enables the cache to be monitored by the cache "sweeper" utility.  The cache sweeper queries for changes in the Content Server, and then "marks" the object in cache as "dirty".  This causes the application in turn to get a new copy of the document from the Content Server that replaces the cached version.  By default, the initial value for the Cache Invalidation Interval is set to 0 (minutes).  This basically means that the sweeper is OFF.  To turn the sweeper ON, just set a value (in minutes).  The minimum value that can be set is 2 (minutes).

Just a note: in some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0.  The good news is that this value can also be updated through a WLST command.  The WLST command to run is as follows:

setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort, keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword, webContextRoot, clientSecurityPolicy, cacheInvalidationInterval, binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout, isPrimary, server, applicationVersion])

One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter',verbose=true) command.
For example, this is the sample output from the execution: ------------------ UCM ------------------ Connection Name: UCM Connection Type: JCR External Appliction ID: Timeout: (not set) CIS Socket Type: socket CIS Server Hostname: webcenter.oracle.local CIS Server Port: 4444 CIS Keystore Location: CIS Private Key Alias: CIS Web URL: Web Server Context Root: /cs Client Security Policy: Admin User Name: sysadmin Cache Invalidation Interval: 2 Binary Cache Maximum Entry Size: 1024 The Documents primary connection is "UCM"

From this information, the completed setJCRContentServerConnection would be:

setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs', cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024', adminUsername='sysadmin', isPrimary=1)

Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here.

Once the sweeper is turned ON, only cache objects that have been changed will be invalidated.  To test this out, I will go through a simple scenario.  The first thing to do is configure the Content Server so it can monitor and report on events.  Log into the Content Server console application, and under the Administration menu item, select System Audit Information.  Note: If your console is using the left menu display option, the Administration link will be located there. Under the Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections.  Check Full Verbose Tracing, check Save, then click the Update button.  Once this is done, select the View Server Output menu option.  This will change the browser view to display the log.  This is all that is needed to configure the Content Server.

For example, the following is the View Server Output with the cache invalidation interval set to 2 (minutes).  Note the time stamps: requestaudit/6 08.30 09:52:26.001  IdcServer-68    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs) requestaudit/6 08.30 09:52:26.010  IdcServer-69    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs) requestaudit/6 08.30 09:52:26.014  IdcServer-70    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs) ... other trace info ... requestaudit/6 08.30 09:54:26.002  IdcServer-71    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs) requestaudit/6 08.30 09:54:26.011  IdcServer-72    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs) requestaudit/6 08.30 09:54:26.017  IdcServer-73    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)

Now that the tracing logs are reporting correctly, the next step is to set up the Spaces app to test the sweeper. I will use 2 different pages that will use Content Presenter task flows.  Each task flow will use a different custom Content Presenter display template, and will be assigned 2 different contributor data files (documents that will be in the cache).  The pages at run time appear as follows: Initially, when the Spaces pages containing the content are loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer.
requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs) requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs) requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs) requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs) requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs) requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs) requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs) requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs) requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs) requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs) requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs) requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs) requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs) requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs) requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs) requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

The highlighted sections show where the 2 data files, DF_UCMCACHETESTER (P1 page) and 113_ES (P2 page), were called by the (Spaces) VCR connection to the Content Server. The most important line to notice is the VCR_GET_DOCUMENT_BY_NAME invocation.  On subsequent refreshes of these 2 pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations.  This is because the pages are getting the documents from the cache.

The next step is to go through the "backdoor" and change one of the documents through the Content Server console.  This operation can be done by first locating the data file document and, from the Content Information page, selecting the Edit Data File menu option.  This invokes the Site Studio Contributor, where the modifications can be made. Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document.
requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs) requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs) requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs) requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs) requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs) requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs) (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs) requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs) requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

After a few moments and refreshing the P1 page, the updates have been applied. Note: The refresh time may vary, since the Cache Invalidation Interval (set to 2 minutes) is not determined by when changes happened.  The sweeper just runs every 2 minutes.

Refreshing the Content Server View Server Output, the tracing displays the important information. requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs) requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs) requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs) requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

After the specified interval time the sweeper is invoked, which is noted by the GET_ ... calls.  Since the history has noted the change, the next call is to VCR_GET_DOCUMENT_BY_NAME to retrieve the new version of the (modified) data file.  Navigating back to the P2 page and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls to retrieve the data file.  This simply means that this data file was just retrieved from the cache.
Upon further review of the server output, we can see that there was only 1 request for the VCR_GET_DOCUMENT_BY_NAME: requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****  

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
There's nothing quite as satisfying as the smooth and crisp action of a well-built keyboard. If you're tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through the paces.

What is the CODE Keyboard? The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood's focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words:

The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I've owned and used at least six different expensive mechanical keyboards, but I wasn't satisfied with any of them, either: they didn't have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That's why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard.

Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can't argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you're a typist of a certain vintage there's a good chance you've never actually typed on a really nice keyboard. Those that didn't start using computers until the mid-to-late 1990s most likely have always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We're not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

Setting Up the CODE Keyboard

Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven't had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you'll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool which you may be unfamiliar with: a key puller. We'll return to the key puller in a moment. Unlike the majority of keyboards on the market, the cord isn't permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we're staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they're huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down, the rubber feet work so well.
After you've secured the cable and adjusted it to your liking, there is one more task before plugging the keyboard into the computer. On the bottom left-hand side of the keyboard, you'll find a small recess in the plastic with some dip switches inside. The dip switches are there to switch hardware functions for various operating systems, keyboard layouts, and to enable/disable function keys. By toggling the dip switches you can change the keyboard from QWERTY mode to Dvorak mode and Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac functionality (for Command/Option keys). One of our favorite little toggles is the SW3 dip switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire dip switch configuration chart here. The quick-start for Windows users is simple: double check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the dip switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).

Design, Layout, and Backlighting

The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right-hand side). We identify the layout as traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to the spacing and position, is as classic as it comes. You won't have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller-than-normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn't mean you'll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, Pause button, and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren't sure what we'd think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly.

Keyboard backlighting is a largely hit-or-miss undertaking, but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the switches the keys are attached to are mounted on a steel plate painted white. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice even illumination in between the keys.

Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It's rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB board beneath the steel plate, and the thick ABS plastic housing, the keyboard has a very solid feel to it.
Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won't budge a millimeter during normal use.

Examining the Keys

This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We've looked at the layout of the keyboard, we've looked at the general construction of it, but what about the actual keys? There are a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards, and a little bit of conductive material on the inside of the dome's apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as "bottoming out". In other words, to register the "b" key, you need to completely press that key down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue.

The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically the CODE features Cherry MX Clear switches. These switches feature the same classic design of the other Cherry switches (such as the MX Blue and Brown switch lineups) but they are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won't think you're firing off a machine gun) as they lack the audible click found in most Cherry switches. This isn't to say that the keyboard doesn't have a nice audible key press sound when the key is fully depressed, but the key mechanism itself doesn't create a loud click sound when triggered.

One of the great features of the Cherry MX Clear is a tactile "bump" that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke, and it provides a welcome speed boost. Even if you're not trying to break any words-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber-dome switch membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million contacts. You'd have to write the next War and Peace and follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages, to even begin putting a tiny dent in the lifecycle of this keyboard.

So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove. Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key, if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard.
There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke.

The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won't find on non-mechanical keyboards, and even gaming keyboards typically only have some sort of key rollover on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn't register. PS/2 keyboards allow for unlimited rollover (in other words, you can't out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don't use the PS/2 adapter and use the native USB, you still get 6-key rollover (and the CTRL, ALT, and SHIFT don't count towards the 6), so realistically you still won't be able to out-type the computer, as even the more finger-twisting keyboard combos and high-speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn't matter if you're a slow hunt-and-peck typist, but if you've read this far into a keyboard review there's a good chance that you're a serious typist, and that kind of quality construction and high-number key rollover is a fantastic feature.

The Good, The Bad, and the Verdict

We've put the CODE keyboard through the paces, we've played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.

The Good: The construction is rock solid. In an emergency, we're confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard). The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offer a really nice middle ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback. The dip switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating systems and keyboard layouts. If you're investing a chunk of change in a keyboard it's nice to know you can take it with you to a different operating system or "upgrade" it to a new layout if you decide to take up Dvorak-style typing. The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys. You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout). The weight of the unit combined with the extra-thick rubber feet keeps it planted exactly where you place it on the desk.

The Bad: While you're getting your money's worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards. People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.

The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists.
Whether you're a programmer, transcriptionist, or just somebody that wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock-solid typing experience. Yes, $150 isn't pocket change, but the quality of the CODE keyboard is so high, and the typing experience is so enjoyable, you're easily getting ten times the value you'd get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you're still getting more for your money, as other mechanical keyboards don't come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system keyboard layout switching. If it's in your budget to upgrade your keyboard (especially if you've been slogging along with a low-end rubber-dome keyboard) there's no good reason not to pick up a CODE keyboard.

Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services or anything self-hosted for that matter seems convoluted.

In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a service application, neither of which follows the standard Service or Web App template.

Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits we set out to move from the self-hosted service into an ASP.NET Web app instead.

The Missing Link for ASP.NET as a Service: Auto-Loading

I've had moments in the past where I wanted to run a 'service like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET, and it worked great and it's still running doing its job today.

The tricky part for running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

Luckily, today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired, and your app stays up and running at all times.
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.

Getting started with Application Initialization

As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. On IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager. This is an optional component, so make sure you explicitly select it.

IIS Configuration for Application Initialization

Initialization needs to be applied on the Application Pool as well as at the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.

Start with the Application Pool: here you need to set both Start Automatically, which is always set, and Start Mode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set to true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.

Now on the Site/Application level you can specify whether the site should preload: set the Preload Enabled flag to true.

At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.

If you want a little more control over the load process, you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs you can display a static HTML page instead:

<system.webServer>
  <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
    <add initializationPage="ping.ashx" />
  </applicationInitialization>
</system.webServer>

This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page, in this case ping.ashx, which just returns a simple OK string - i.e. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline.
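As an aside, a handler like the ping.ashx mentioned above only takes a few lines. The original post doesn't show its implementation, so this is a minimal sketch with an assumed class name and response text:

<%@ WebHandler Language="C#" Class="Ping" %>

using System.Web;

// Minimal warm-up endpoint: hitting any managed handler forces ASP.NET to spin up,
// which is all the initializationPage request needs to accomplish.
public class Ping : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    // The handler keeps no per-request state, so a single instance can be reused.
    public bool IsReusable { get { return true; } }
}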
Ideally though, you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

What about AppDomain Restarts?

In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons:

Files are updated in the BIN folder
Web Deploy to your site
web.config is changed
Hard application crash

These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.

In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:

protected void Application_End()
{
    var client = new WebClient();
    var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
    client.DownloadString(url);
    Trace.WriteLine("Application Shut Down Ping: " + url);
}

which fires any ASP.NET URL to the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

Manual Configuration in ApplicationHost.config

The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them. When you install the Application Initialization component into IIS, it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form, the module registration did not occur and I had to manually add it.

<globalModules>
  <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
</globalModules>

Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered.

Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config:

<system.applicationHost>
  <applicationPools>
    <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
      <processModel identityType="LocalSystem" setProfileEnvironment="true" />
    </add>
  </applicationPools>
  <sites>
    <site name="Default Web Site" id="1">
      <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true">
        <virtualDirectory path="/" physicalPath="C:\Clients\…" />
      </application>
    </site>
  </sites>
</system.applicationHost>

On the Application Pool, make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site, make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well.

ASP.NET as a Service?

In the particular application I'm working on currently, we have a queue manager that runs as a standalone service that polls a database queue, picks out jobs, and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
In order for this service to work, all it needs is a long-running reference that keeps it alive for the lifetime of the application.

In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

public class Global : System.Web.HttpApplication
{
    private static ApplicationScheduler scheduler;
    private static ServiceLauncher launcher;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Pings the service and ensures it stays alive
        scheduler = new ApplicationScheduler()
        {
            CheckFrequency = 600000
        };
        scheduler.Start();

        launcher = new ServiceLauncher();
        launcher.Start();

        // register so shutdown is controlled
        HostingEnvironment.RegisterObject(launcher);
    }
}

By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume, etc.). Otherwise the behavior and operation is very similar.

In this application ASP.NET serves two purposes: it acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.

Registering a Background Object with ASP.NET

Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:

public void Stop(bool immediate = false)
{
    LogManager.Current.LogInfo("QueueManager Controller Stopped.");
    Controller.StopProcessing();
    Controller.Dispose();
    Thread.Sleep(1500);  // give background threads some time
    HostingEnvironment.UnregisterObject(this);
}

Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing to allow clean shutdowns when the AppDomain shuts down.

Testing it out

I'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to switch from the Windows service startup to Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting, because with the new OWIN-based hosting no code changes at all were required; merely a couple of configuration file settings and an assembly directive were needed to point at the SignalR startup class. Sweet!

It also seems like SignalR is noticeably faster running inside of IIS compared to self-host.
Startup feels faster because of the preload.

Starting and Stopping the 'Service'

Because the application is running inside of a Web server, it's easy to have a Web interface for starting and stopping the services running inside of the application. For our queue manager, the SignalR service and front-end monitoring app have play and stop buttons for toggling the queue. If you want more administrative control and have it work more like a Windows Service, you can also stop the application pool explicitly from the command line, which would be equivalent to stopping and restarting a service. To start and stop from the command line you can use the IIS appCmd tool. To stop:

> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

and to start:

> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

Note that when you explicitly force the AppPool to stop running, either in the UI (on the Application Pools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up (a managed-code alternative to appCmd for this is sketched below).

What's not to like?

There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires, be offloaded to its own Web server.

Easier to work with

But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS, simply by hitting a page on the site. If I want to shut down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different than debugging any other code.

Summary

Using ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very, very easy to create rich services that can also communicate their status to the outside world. Whether it's an existing application that needs some background processing for scheduling-related tasks, or whether you create a separate site altogether just to host your service, it's easy to do and you can leverage the same tool chain you're already using for other Web projects.
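As a footnote to the appCmd commands above: the same start/stop operations can also be performed from managed code through the Microsoft.Web.Administration API. Here is a minimal sketch (my addition, not from the original post), assuming a pool named "Weblog" and a reference to Microsoft.Web.Administration.dll; like appCmd, it needs to run with administrative rights:

using System;
using Microsoft.Web.Administration;

class AppPoolControl
{
    static void Main()
    {
        // ServerManager reads/writes the live IIS configuration.
        using (var manager = new ServerManager())
        {
            // "Weblog" is a hypothetical pool name - substitute your own.
            ApplicationPool pool = manager.ApplicationPools["Weblog"];

            if (pool.State == ObjectState.Started)
                pool.Stop();
            else if (pool.State == ObjectState.Stopped)
                pool.Start(); // remember: a force-stopped pool won't restart on its own

            Console.WriteLine("Pool is now: " + pool.State);
        }
    }
}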
If you have lots of service projects it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • CodePlex Daily Summary for Wednesday, February 17, 2010

CodePlex Daily Summary for Wednesday, February 17, 2010

New Projects
Academic Success Accounting System: The system is intended for use by school teachers to set marks for students and estimate their academic success and possibilities. The client applicat...
Access.PowerTools: Access PowerTools is currently a sample MS Access add-in project to try & test features of Add-in Express™ 2009 for Microsoft® Office and .net (ht...
AntoonCms: AntoonCms makes it easy to maintain a simple website with its builtin administration pages. It's developed in C# on target Framework 2.0. The CMS...
ASP.NET MVC Mehr Lib: Mehr Lib makes it easier for ASP.NET MVC developers to develop projects. It's developed in C#. This version currently includes Ajax master detail...
BCryptTool: Developer tool that calculates BCrypt hash codes for strings. BCrypt is an implementation of the Blowfish cipher and a computationally-expensive ha...
Coronasoft Cryostasis scripting engine: A scripting engine that allows you to dynamically load plugins from just about any supported .NET language. It's written in C#. Languages supported ...
Critical Point Search: Critical Point Search
critical points: critical points
Font Family Name Retrieval: This library helps developers to retrieve the font family name from TTF, OTF and TTC font files, so that developers can display the font without ...
jQuery Form Input Hints Plugin: Automatically display hints on input textboxes in your forms using this jQuery plugin. I wrote this code to be as simple and as easy to use as pos...
Kojax: kojax project
KronRetro: KronRetro! Making a Habbo Retro just got easier! Powered by PHP & MySQL you can make a Habbo Retro site fast!
MVVM Wrapper Kit: MVVM Wrapper Kit makes it easier for View Model programmers to wrap their business objects and collections while preserving change notification and...
ObjectCartographer: ObjectCartographer is an object to object mapper and object factory. It's developed in C#.
PE-file Reader Writer API (PERWAPI): PERWAPI is a reader writer module for .NET program executables. It has been used as a back-end for programming language compilers such as Gardens Poi...
Pinger: A simple Pinger, pings an address until you press a button
QPV: 0.1: QPV, aka Que pelicula, is an application that builds a powerful database of movies, reviews and information so you can filter movi...
SIMD Detector: This SIMD class helps developers to detect the types of SIMD instructions available on users' processors. It supports Intel and AMD CPUs. It is writt...
StackOverflow Test Project: Following Andrew Siemer's StackOverflow Knowledge Exchange Project.
WeBlog: A blogging platform built on the MVC framework. The project will showcase current technologies such as MVC 2, Silverlight 4 and jQuery 1.4. Data pro...
Webmedia: this is my webmedia project
Windows Azure RSS Reader: This is an online RSS reader based on the Windows Azure platform
WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: This is a word editor that can be used as a stand alone word processor, or added to an existing project.
Домашняя Бухгалтерия: A program for keeping household financial accounts.
New Releases
Access.PowerTools: Access PowerTools Add-In Community Edition v.0.0.1: Access PowerTools Add-In Community Edition v.0.0.1 is a sample MS Access add-in project to try & test Add-in Express™ 2009 for Microsoft® Office an...
Active CSS: ActiveCSS-0.1.1: revision for version 0.1
ASP.NET: Microsoft Ajax Minifier 4.0: The Microsoft Ajax Minifier enables you to improve the performance of your Ajax applications by reducing the size of your Cascading Style Sheet and...
ASP.NET MVC Mehr Lib: V1.0: Mehr Lib V1.0. This version currently includes ajax master detail combo facilities.
ASP.net Ribbon: Version 1.2: New controls: Expandable gallery, Color Picker, Multi color, File Menu. Some JS modifications. Some CSS modifications. Includes some functionna...
ASP.NET Web Forms Model-View-Presenter (MVP) Contrib: WebForms MVP Contrib CTP6: This is a release of the WebForms MVP Contrib project for WebForms MVP CTP6. Release includes: WebForms MVP Contrib framework, Ninject IoC container
AwesomiumDotNet: AwesomiumDotNet 1.2.1: - Added Awesomium 1.5 features: URL filtering, header rewrite rules, SetOpensExternalLinksInCallingFrame. - Numerous fixes and improvements.
BCryptTool: BCryptTool v0.1: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
Buzz Dot Net: Buzz Dot Net v.1.10216: Features: Parse Google Buzz feed to Objects, Partial MVVM Implementation, Partial Optimizations
Canvas VSDOC Intellisense: v1.0.0.0a: canvas-vsdoc.js and canvas-utils.js JavaScript intellisense for HTML5 Canvas element.
CheckHeader: CheckHeader v0.8.5: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
Claymore MVP: Claymore 1.0.2.0: Changelog: Added ASP.NET WebForm support via ClaymoreHttpModule class. Added xsd schema for Visual Studio Intellisense within App.config and Web....
Dam Gd - URL Shortner: Dam.gd Version 1.1: This is the latest instalment in our URL shortner. It uses The Easy API http://theeasyapi.com to access data that is used for the back-end analyti...
D-AMPS: D-AMPS 0.9.1: Initial version.
easySMS: easySMS 1.0 Source code: easySMS 1.0 Source code
Font Family Name Retrieval: 1st Release: Version 1.0.0
Free Silverlight & WPF Chart Control - Visifire: Visifire Now Supports DataBinding: Hi, Today we are releasing the much awaited DataBinding feature in Visifire 3.0.3 beta 3. Now you can bind any DataSource at the Series level so t...
GenerateTypedBamApi: Version 2.0: Changes in this release: NEW: Export functionality no longer requires Excel to be installed (uses OLE DB vs. Excel Automation; also enables usage i...
Gmail Notifier 2: GmailNotifier2 1.2.1: Fixes issues #9652, #9653
iTuner - The iTunes Companion: iTuner 1.1.3699: This includes the first pass of the iTuner Librarian including management of dead tracks, duplicates, and empty directories... While I promised a ...
jQuery Form Input Hints Plugin: jQuery.InputHints v1.0: jQuery.InputHints v1.0. Includes: Standard & minified source, Demo HTML file, VS2008 Solution
LibWowArmory: LibWowArmory 0.2.3 beta: This release of the LibWowArmory source code matches the WoW Armory as of version 3.3.2. Changes since version 0.2.2: Update...
Managed Extensibility Framework: MEF Preview 9: We have merged the .net 3.5 and Silverlight 3 into a single zip.
The bin folder contains the binaries for .net 3.5 whereas bin\SL contains the bina...MDX Parser,Builder,DOM and OLAP visual controls with Writeback for Silverlight: Ranet.UILibrary.Olap-1.3.3.0-6571.msi: February 16, 2010 * MdxDesigner: Fix for the issue where when an element is clicked, the mouse wheel stops working until the cursor leaves and r...MEFGeneric: MEFGeneric Preview 9: MEFGeneric Preview 9 release.Mesopotamia Experiment: Mesopotamia 1.2.26: Bug Fixes - mud map - progress window - recycle app domains on robotics engine crashes( in command prompt and visual, major work) - fixed rooomba h...Microsoft Solution Framework for Business Intelligence in Media: Release 1.0: This is the public release of the Microsoft Solution Framework for Business Intelligence in Media (Release 1.0).MVVM Wrapper Kit: MVVM Wrapper Beta: A simple test project is included to get you up and running, and wrapping those business objects.nBayes - Bayesian Filtering in C#: nBayes v0.2: nBayes' indexing system is factored in such a way that you can easily replace the index with a custom implementation. This release introduces an ad...NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.5: 3.6.0.5 16-February-2010 - Fix: SqlAzManSid Class. "Equals" matches object signiture instead of IAzManSid signiture. When a real null object is pas...ObjectCartographer: ObjectCartographer Code 1.0: This is the first release and contains code to help with object to object mapping (including mapping from one object to multiple objects), object f...Office Apps: 0.8.6: Bug fix's, added Calendar.OI - Open Internet: OI HTML and .XAP files (OI offline): this is the HTML code and the XAP file. please right-click the app at http://bit.ly/openinternet and select "install openinternet application to th...PE-file Reader Writer API (PERWAPI): PERWAPI-1.1.3: Perwapi version 1.1.3 is the complete distribution package. It contains Binary files, pdb files and xml files for the PERWAPI and SymbolRW compone...Pinger: Pinger 1.0.0.0 Binary: The Latest BinaryRNA Comparative Analysis Software Tools: RNA Comparative Analysis Software Tools 2.0: RNA Comparative Analysis Software Tools Version 2.0 Note: The RNA Comparative Analysis Software Tools are provided as is, without any warranty. No...SAL- Self Artificial Learning: Artificial Learning working proof of concept: This is a working proof of concept. It includes the Dev version (in .zip format) and the consumer version (in .exe format)SharePoint Management PowerShell scripts: SharePoint 2010 PowerShell Scripts: All the SharePoint 2010 PowerShell Scripts The first file is an Excel 2010 file allowing to find quiclky and easily the new cmdlets available wi...SIMD Detector: 1st Release: Version 1.3 Supports MMX/MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, SSE5, 3DNow.Terminals: Terminals 1.9 Beta Release: This is a beta release so the new features being added to terminals can be tested properly. The major change in this release is that Terminals has...Text Designer Outline Text Library: 9th minor release: Added the ability to select brush, such as gradient brush or texture brush for the text body. Added CSharp library, TextDesignerCSLibrary. Manage...VivoSocial: VivoSocial 7.0.2: This release has several updated modules. See the Support Forums for more details. 
Since we update modules very often, we will be changing how we d...WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.6.00: changes CKEditor Upgrade to Version 3.2 SVN 5132 File Browser: After File Upload, File will be Auto Selected File Browser: Icons are corrected ...WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: WordEditor Source Code: This contains the latest solution file, with all project files included.Домашняя Бухгалтерия: Alapha Realease: Принимаются ваши предложения по дизайну и функциональности программы.Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Image Resizer Powertoy Clone for WindowsMicrosoft SQL Server Community & SamplesASP.NETLiveUpload to FacebookMost Active ProjectsDinnerNow.netRawrBlogEngine.NETSimple SavantNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices – Enterprise LibraryPHPExcelSharpyjQuery Library for SharePoint Web ServicesFluent Validation for .NET

    Read the article

  • Fluent NHibernate: mapping complex many-to-many (with additional columns) and setting fetch

    - by HackedByChinese
    I need a Fluent NHibernate mapping that will fulfill the following (if nothing else, I'll also take the appropriate NHibernate XML mapping and reverse engineer it).

    DETAILS

    I have a many-to-many relationship between two entities: Parent and Child. That is accomplished by an additional table that stores the identities of the Parent and Child. However, I also need to define two additional columns on that mapping that provide more information about the relationship. This is roughly how I've defined my types, at least the relevant parts (where Entity is some base type that provides an Id property and checks for equivalence based on that Id):

    public class Parent : Entity
    {
        public virtual IList<ParentChildRelationship> Children { get; protected set; }

        public virtual void AddChildRelationship(Child child, int customerId)
        {
            var relationship = new ParentChildRelationship
            {
                CustomerId = customerId,
                Parent = this,
                Child = child
            };
            if (Children == null) Children = new List<ParentChildRelationship>();
            if (Children.Contains(relationship)) return;
            relationship.Sequence = Children.Count;
            Children.Add(relationship);
        }
    }

    public class Child : Entity
    {
        // child doesn't care about its relationships
    }

    public class ParentChildRelationship
    {
        public int CustomerId { get; set; }
        public Parent Parent { get; set; }
        public Child Child { get; set; }
        public int Sequence { get; set; }

        public override bool Equals(object obj)
        {
            if (ReferenceEquals(null, obj)) return false;
            if (ReferenceEquals(this, obj)) return true;
            var other = obj as ParentChildRelationship;
            if (other == null) return false;
            return (CustomerId == other.CustomerId && Parent == other.Parent && Child == other.Child);
        }

        public override int GetHashCode()
        {
            unchecked
            {
                int result = CustomerId;
                result = (result * 397) ^ (Parent == null ? 0 : Parent.GetHashCode());
                result = (result * 397) ^ (Child == null ? 0 : Child.GetHashCode());
                return result;
            }
        }
    }

    The tables in the database look approximately like this (assume primary/foreign keys and forgive the syntax):

    create table Parent (
        id int identity(1,1) not null
    )

    create table Child (
        id int identity(1,1) not null
    )

    create table ParentChildRelationship (
        customerId int not null,
        parent_id int not null,
        child_id int not null,
        sequence int not null
    )

    I'm OK with Parent.Children being a lazy-loaded property. However, the ParentChildRelationship should eager-load ParentChildRelationship.Child, and I want it to use a join when it does. When Parent.Children is accessed, NHibernate should generate a query equivalent to:

    SELECT *
    FROM ParentChildRelationship rel
    LEFT OUTER JOIN Child ch ON rel.child_id = ch.id
    WHERE rel.parent_id = ?
    OK, so to do that I have mappings that look like this:

    public class ParentMap : ClassMap<Parent>
    {
        public ParentMap()
        {
            Table("Parent");
            Id(c => c.Id).GeneratedBy.Identity();
            HasMany(c => c.Children).KeyColumn("parent_id");
        }
    }

    public class ChildMap : ClassMap<Child>
    {
        public ChildMap()
        {
            Table("Child");
            Id(c => c.Id).GeneratedBy.Identity();
        }
    }

    public class ParentChildRelationshipMap : ClassMap<ParentChildRelationship>
    {
        public ParentChildRelationshipMap()
        {
            Table("ParentChildRelationship");
            CompositeId()
                .KeyProperty(c => c.CustomerId, "customerId")
                .KeyReference(c => c.Parent, "parent_id")
                .KeyReference(c => c.Child, "child_id");
            Map(c => c.Sequence).Not.Nullable();
        }
    }

    So, in my test, if I try to get myParentRepo.Get(1).Children, it does in fact get me all the relationships and, as I access them from the relationship, the Child objects (for example, I can grab them all by doing parent.Children.Select(r => r.Child).ToList()). However, the SQL that NHibernate is generating is inefficient. When I access parent.Children, NHibernate does a SELECT * FROM ParentChildRelationship WHERE parent_id = 1 and then a SELECT * FROM Child WHERE id = ? for each child in each relationship. I understand why NHibernate is doing this, but I can't figure out how to set up the mapping to make NHibernate query the way I mentioned above.
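    For what it's worth, one direction to experiment with (a sketch under assumptions, not a verified answer): in NHibernate XML the equivalent would be fetch="join" lazy="false" on the many-to-one to Child, and since a composite-id KeyReference doesn't expose fetch settings, a common workaround is to map Child a second time as a read-only many-to-one. This assumes your Fluent NHibernate version exposes Fetch.Join(), Not.LazyLoad() and ReadOnly() on these parts, so verify before relying on it:

    // using FluentNHibernate.Mapping;
    // Sketch only: assumes Fetch.Join()/Not.LazyLoad()/ReadOnly() exist in your
    // Fluent NHibernate version. ReadOnly() maps the column with insert/update
    // disabled so it doesn't clash with the composite-id KeyReference.
    public class ParentChildRelationshipMap : ClassMap<ParentChildRelationship>
    {
        public ParentChildRelationshipMap()
        {
            Table("ParentChildRelationship");
            CompositeId()
                .KeyProperty(c => c.CustomerId, "customerId")
                .KeyReference(c => c.Parent, "parent_id")
                .KeyReference(c => c.Child, "child_id");
            Map(c => c.Sequence).Not.Nullable();

            // Hypothetical second mapping of the same column: join-fetch the
            // Child so loading a relationship row pulls its Child in one SELECT.
            References(c => c.Child, "child_id")
                .Fetch.Join()
                .Not.LazyLoad()
                .ReadOnly();
        }
    }

    If that works in your version, accessing parent.Children should issue the relationship SELECT with the Child joined in, rather than one extra SELECT per Child.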

    Read the article

  • [OpenGL ES - Android] Better way to generate tiles

    - by Inoe
    Hi! I'll start by saying that I'm REALLY new to OpenGL ES (I started yesterday =), but I do have some experience with Java and other languages. I've looked at a lot of tutorials, of course Nehe's ones, and my work is mainly based on that. As a test, I started creating a "tile generator" in order to create a small Zelda-like game (just moving a dude in a textured square would be awesome :p). So far, I have achieved a working tile generator: I define a char map[][] array to store which tile goes where:

    private char[][] map = {
        {0, 0, 20, 11, 11, 11, 11, 4, 0, 0},
        {0, 20, 16, 12, 12, 12, 12, 7, 4, 0},
        {20, 16, 17, 13, 13, 13, 13, 9, 7, 4},
        {21, 24, 18, 14, 14, 14, 14, 8, 5, 1},
        {21, 22, 25, 15, 15, 15, 15, 6, 2, 1},
        {21, 22, 23, 0, 0, 0, 0, 3, 2, 1},
        {21, 22, 23, 0, 0, 0, 0, 3, 2, 1},
        {26, 0, 0, 0, 0, 0, 0, 3, 2, 1},
        {0, 0, 0, 0, 0, 0, 0, 0, 0, 1},
        {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}
    };

    It's working, but I'm not happy with it; I'm sure there is a better way to do these things.

    1) Loading textures

    I create an ugly-looking array containing the tiles I want to use on that map:

    private int[] textures = {
        R.drawable.herbe,                  //0
        R.drawable.murdroite_haut,         //1
        R.drawable.murdroite_milieu,       //2
        R.drawable.murdroite_bas,          //3
        R.drawable.angledroitehaut_haut,   //4
        R.drawable.angledroitehaut_milieu, //5
    };

    (I cut this on purpose; I currently load 27 tiles.) All of these are stored in the drawable folder; each one is a 16*16 tile. I then use this array to generate the textures and store them in a HashMap for later use:

    int[] tmp_tex = new int[textures.length];
    gl.glGenTextures(textures.length, tmp_tex, 0);
    texturesgen = tmp_tex; // Store the generated names in texturesgen

    for (int i = 0; i < textures.length; i++) {
        // Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), textures[i]);
        InputStream is = context.getResources().openRawResource(textures[i]);
        Bitmap bitmap = null;
        try {
            // BitmapFactory is an Android graphics utility for images
            bitmap = BitmapFactory.decodeStream(is);
        } finally {
            // Always clear and close
            try {
                is.close();
                is = null;
            } catch (IOException e) {
            }
        }

        // Get a new texture name and load it up
        this.textureMap.put(new Integer(textures[i]), new Integer(i));
        int tex = tmp_tex[i];
        gl.glBindTexture(GL10.GL_TEXTURE_2D, tex);
        // Create a nearest-filtered texture
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        // Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        // Use the Android GLUtils to specify a two-dimensional texture image from our bitmap
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
    }

    I'm quite sure there is a better way to handle that... I just was unable to figure it out. If someone has an idea, I'm all ears.
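    As a side note on the "ugly-looking array": one tidier option (a sketch, assuming a resource array named tiles, which is a hypothetical name) is to declare the tile drawables once in res/values/arrays.xml and read them in a loop, so adding tile 28 doesn't mean editing Java code:

    import android.content.res.TypedArray;

    // res/values/arrays.xml (assumed):
    //   <integer-array name="tiles">
    //       <item>@drawable/herbe</item>
    //       <item>@drawable/murdroite_haut</item>
    //       ...
    //   </integer-array>
    TypedArray ta = context.getResources().obtainTypedArray(R.array.tiles);
    int[] textures = new int[ta.length()];
    for (int i = 0; i < textures.length; i++) {
        textures[i] = ta.getResourceId(i, 0); // drawable resource id for tile i
    }
    ta.recycle(); // TypedArrays must be recycled after use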
    2) Drawing the tiles

    What I did was create a single square and a single texture map:

    /** The initial vertex definition */
    private float vertices[] = {
        -1.0f, -1.0f, 0.0f,  // Bottom left
         1.0f, -1.0f, 0.0f,  // Bottom right
        -1.0f,  1.0f, 0.0f,  // Top left
         1.0f,  1.0f, 0.0f   // Top right
    };

    /** Mapping coordinates for the vertices */
    private float texture[] = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f
    };

    Then, in my draw function, I loop through the map to define the texture to use (after pointing to and enabling the buffers):

    for (int y = 0; y < Y; y++) {
        for (int x = 0; x < X; x++) {
            tile = map[y][x];
            try {
                // Get the texture from the HashMap
                int textureid = ((Integer) this.textureMap.get(new Integer(textures[tile]))).intValue();
                gl.glBindTexture(GL10.GL_TEXTURE_2D, this.texturesgen[textureid]);
            } catch (Exception e) {
                return;
            }

            // Draw the vertices as a triangle strip
            gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
            gl.glTranslatef(2.0f, 0.0f, 0.0f); // A square takes 2x, so I move +2x before drawing the next tile
        }
        gl.glTranslatef(-(float)(2 * X), -2.0f, 0.0f); // Go back to the start of the row and move 2y down before drawing the next line
    }

    This works great, but I really think that on a 1000*1000 or larger map it will lag like hell (as a reminder, this is a typical Zelda world map: http://vgmaps.com/Atlas/SuperNES/LegendOfZelda-ALinkToThePast-LightWorld.png ). I've read things about vertex buffer objects and display lists, but I couldn't find a good tutorial, and nobody seems to agree on which one is best / has better support (the T1 and Nexus One are ages apart). I think that's it; I've put in a lot of code, but I think it helps. Thanks in advance!
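    On the performance worry: the per-tile glBindTexture/glDrawArrays pair is usually the first thing to remove. Here is a minimal sketch of the standard fix, assuming the 16*16 tiles are packed into one atlas texture (ATLAS_COLS and the class name TileBatch are illustrative, not from the question): build one vertex buffer and one UV buffer for the tiles up front, then draw the whole map with a single bind and a single glDrawArrays.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import javax.microedition.khronos.opengles.GL10;

    public class TileBatch {
        private static final int ATLAS_COLS = 8;           // hypothetical atlas layout
        private static final float STEP = 1f / ATLAS_COLS; // UV size of one tile
        private FloatBuffer vertexBuffer;
        private FloatBuffer uvBuffer;
        private int quadCount;

        /** Build both buffers once (or when the map changes), not every frame. */
        public void build(char[][] map) {
            int rows = map.length, cols = map[0].length;
            float[] verts = new float[rows * cols * 6 * 3]; // 2 triangles per tile
            float[] uvs = new float[rows * cols * 6 * 2];
            int v = 0, t = 0;
            quadCount = 0;
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                    int tile = map[y][x];
                    // UV origin of this tile inside the atlas
                    float u0 = (tile % ATLAS_COLS) * STEP;
                    float v0 = (tile / ATLAS_COLS) * STEP;
                    // Quad corners; same 2-unit spacing as the question's loop
                    float x0 = 2f * x, y0 = -2f * y, x1 = x0 + 2f, y1 = y0 - 2f;
                    float[] quad = { x0, y0, 0, x1, y0, 0, x0, y1, 0,
                                     x1, y0, 0, x1, y1, 0, x0, y1, 0 };
                    float[] quadUv = { u0, v0, u0 + STEP, v0, u0, v0 + STEP,
                                       u0 + STEP, v0, u0 + STEP, v0 + STEP, u0, v0 + STEP };
                    System.arraycopy(quad, 0, verts, v, quad.length); v += quad.length;
                    System.arraycopy(quadUv, 0, uvs, t, quadUv.length); t += quadUv.length;
                    quadCount++;
                }
            }
            vertexBuffer = toBuffer(verts);
            uvBuffer = toBuffer(uvs);
        }

        /** One bind + one draw call for the whole map. Assumes the caller has
         *  enabled GL_VERTEX_ARRAY / GL_TEXTURE_COORD_ARRAY as in the question. */
        public void draw(GL10 gl, int atlasTextureName) {
            gl.glBindTexture(GL10.GL_TEXTURE_2D, atlasTextureName);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
            gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, uvBuffer);
            gl.glDrawArrays(GL10.GL_TRIANGLES, 0, quadCount * 6);
        }

        private static FloatBuffer toBuffer(float[] data) {
            ByteBuffer bb = ByteBuffer.allocateDirect(data.length * 4);
            bb.order(ByteOrder.nativeOrder());
            FloatBuffer fb = bb.asFloatBuffer();
            fb.put(data).position(0);
            return fb;
        }
    }

    The same idea extends to a 1000*1000 world: build the buffers only for the on-screen window of tiles, so they stay small regardless of total map size.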

    Read the article

  • Consuming a PHP SOAP WebService with ASP.NET

    - by Jamie
    I'm having some major issues trying to consume my PHP SOAP web service using ASP.NET. The web service in question is based on the PHP SOAP extension and is described by the following WSDL:

    <?xml version="1.0" encoding="UTF-8" ?>
    <definitions name="MyServices"
        targetNamespace="http://mydomain.com/api/soap/v11/services"
        xmlns:tns="http://mydomain.com/api/soap/v11/services"
        xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
        xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns:xsd1="http://mydomain.com/api/soap/v11/services"
        xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
        xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
        xmlns="http://schemas.xmlsoap.org/wsdl/">
      <types>
        <schema targetNamespace="http://mydomain.com/api/soap/v11/services" xmlns="http://www.w3.org/2001/XMLSchema">
          <complexType name="ServiceType">
            <all>
              <element name="id" type="xsd:int" minOccurs="1" maxOccurs="1" />
              <element name="name" type="xsd:string" minOccurs="1" maxOccurs="1" />
              <element name="cost" type="xsd:float" minOccurs="1" maxOccurs="1" />
            </all>
          </complexType>
          <complexType name="ArrayOfServiceType">
            <all>
              <element name="Services" type="ServiceType" minOccurs="0" maxOccurs="unbounded" />
            </all>
          </complexType>
        </schema>
      </types>
      <message name="getServicesRequest">
        <part name="postcode" type="xsd:string" />
      </message>
      <message name="getServicesResponse">
        <part name="Result" type="xsd1:ArrayOfServiceType"/>
      </message>
      <portType name="ServicesPortType">
        <operation name="getServices">
          <input message="tns:getServicesRequest"/>
          <output message="tns:getServicesResponse"/>
        </operation>
      </portType>
      <binding name="ServicesBinding" type="tns:ServicesPortType">
        <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
        <operation name="getServices">
          <soap:operation soapAction="http://mydomain.com/api/soap/v11/services/getServices" />
          <input>
            <soap:body use="encoded" namespace="urn:my:services" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" />
          </input>
          <output>
            <soap:body use="encoded" namespace="urn:my:services" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" />
          </output>
        </operation>
      </binding>
      <service name="MyServices">
        <port name="ServicesPort" binding="tns:ServicesBinding">
          <soap:address location="http://mydomain.com/api/soap/v11/services"/>
        </port>
      </service>
    </definitions>

    I can successfully generate a proxy class from this WSDL in Visual Studio, but upon trying to invoke the getServices method I am presented with an exception:

    System.Web.Services.Protocols.SoapHeaderException: Procedure 'string' not present

    After inspecting the raw post data at the SOAP server end, my PHP SOAP client is making requests like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope
        xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
        SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <SOAP-ENV:Body>
        <postcode xsi:type="xsd:string">ln4 4nq</postcode>
      </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>

    Whereas the .NET proxy class is doing this:

    <?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope
        xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
        xmlns:tns="http://mydomain.com/api/soap/v11/services"
        xmlns:types="http://mydomain.com/api/soap/v11/services/encodedTypes"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <soap:Body soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
        <xsd:string xsi:type="xsd:string">LN4 4NQ</xsd:string>
      </soap:Body>
    </soap:Envelope>

    I can only assume the difference in the way the postcode parameter is being sent is where the problem lies, but as primarily a PHP developer I'm at a loss as to what's occurring here. I have a feeling I'm simply missing something vital in my WSDL, as I've seen countless examples of "consuming PHP SOAP web services with .NET" which suggest that it "just works". Any suggestion as to where I've slipped up here would be greatly appreciated. I've currently spent almost an entire day on this ;-) Thanks in advance, Jamie
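    For what it's worth, the mismatch shown above is the classic symptom of a binding declared as document style while the messages are rpc/encoded-shaped: the .NET proxy serializes the bare typed part (<xsd:string>) instead of a named <postcode> element. A hedged sketch of the likely fix, assuming the rest of the WSDL stays as-is: change one attribute in the binding, then regenerate the proxy.

    <!-- Sketch: declare the binding as rpc instead of document. -->
    <binding name="ServicesBinding" type="tns:ServicesPortType">
      <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
      <!-- getServices operation element unchanged -->
    </binding>

    With rpc/encoded, the generated .NET client wraps the parameters as <tns:getServices><postcode>...</postcode></tns:getServices>, which is much closer to what the PHP SOAP server is matching procedures against.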

    Read the article
