Search Results

Search found 9094 results on 364 pages for 'dynamic entities'.


  • Relationships in a Chen ERD

    - by Nibroc A Rehpotsirhc
    I am working on a Chen ERD to model our organization's merchandise. Our central entity is a Style. We have supplemental entities of Color and Season. I am defining our assortment as the relationship between these three entities; this relationship itself has attributes and is defined by the three entities that participate in the mandatory relationship. The rules are: many Styles can be offered in a Season, and a Style can be offered in many Seasons. Within a Season, a Style can be offered in many Colors. I then have two other entities, one of which I believe is a weak entity, Climate, and the other may be weak, but I am not sure, this being Transaction Channel. I am thinking of these as relationships off of a relationship. Meaning, for a given Style/Color combination offered in a Season, it can be available through one or more Transaction Channels. Additionally, within a Season, a given Style/Color combination can be intended for one or more Climates. Is it valid to have relationships off of relationships? Or does this requirement dictate that I should think of the Style/Color/Season relationship as an entity itself, and define the relationships to Climate and Transaction Channel off of that entity?

    Read the article

  • Designing a flexible tile-based engine

    - by Vee
    I'm trying to create a flexible tile-based game engine to make all sorts of non-realtime puzzle games, such as Bejeweled, Civilization, Sokoban, and so on. The first approach I had was to have a 2D array of Tile objects, and then have classes inheriting from Tile that represented the game objects. Unfortunately that way I couldn't stack more game elements on the same Tile without having a 3D array. Then I did something different: I still had the 2D array of Tile objects, but every Tile object contained a List where I put the different entities. This worked fine until 20 minutes ago, when I realized that many operations become too expensive. Take this example: I have a Wall entity. Every update I have to check the 8 adjacent Tiles, then check all of the entities in each Tile's List, check if any of those entities is a Wall, and finally draw the correct sprite. (This is done to draw walls that are next to each other seamlessly.) The only solution I see now is having a 3D array, with many layers, that could suit every situation. But that way I can't stack two entities that share the same layer on the same tile. Whenever I want to do that I have to create a new layer. Is there a better solution? What would you do?
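
    One possible middle ground, sketched below in C# (all class and member names are illustrative, not from the question): keep the per-tile entity list for flexible stacking, but cache a bitmask of the entity kinds present on each tile, so a neighbour query like "is there a Wall next to me?" becomes a constant-time flag check instead of a scan of eight lists every update.

      using System;
      using System.Collections.Generic;

      [Flags]
      public enum EntityKind { None = 0, Wall = 1, Crate = 2, Unit = 4, Gem = 8 }

      public class Entity
      {
          public EntityKind Kind { get; private set; }
          public Entity(EntityKind kind) { Kind = kind; }
      }

      public class Tile
      {
          private readonly List<Entity> entities = new List<Entity>();
          public EntityKind Contents { get; private set; }          // cached union of all kinds on this tile

          public void Add(Entity e)    { entities.Add(e); Rebuild(); }
          public void Remove(Entity e) { entities.Remove(e); Rebuild(); }

          public bool Contains(EntityKind kind) { return (Contents & kind) != 0; }

          private void Rebuild()
          {
              Contents = EntityKind.None;
              foreach (var e in entities) Contents |= e.Kind;       // cheap: runs only when the tile changes
          }
      }

    The wall-drawing check then only reads the eight neighbours' Contents flags, which stay correct because they are rebuilt on the rare add/remove rather than on every update.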

    Read the article

  • List<T>.AddRange is causing a brief Update/Draw delay

    - by Justin Skiles
    I have a list of entities which implement an ICollidable interface. This interface is used to resolve collisions between entities. My entities are thus: Players, Enemies, Projectiles, Items, Tiles. On each game update (about 60 t/s), I am clearing the list and adding the current entities based on the game state. I am accomplishing this via: collidableEntities.Clear(); collidableEntities.AddRange(players); collidableEntities.AddRange(enemies); collidableEntities.AddRange(projectiles); collidableEntities.AddRange(items); collidableEntities.AddRange(camera.VisibleTiles); Everything works fine until I add the visible tiles to the list. The first ~1-2 seconds of running the game loop cause a visible hiccup that delays drawing (so I can see a jitter in the rendering). I can literally remove/add the line that adds the tiles and see the jitter occur and not occur, so I have narrowed it down to that line. My question is, why? The list of VisibleTiles is about 450-500 tiles, so it's really not that much data. Each tile contains a Texture2D (image) and a Vector2 (position) to determine what is rendered and where. I'm going to keep looking, but off the top of my head I can't understand why only the first 1-2 seconds hiccup but it is then smooth from there on out. Any advice is appreciated.
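
    Without profiling it's impossible to say for certain, but a common cause of a hiccup that appears only in the first seconds and then disappears is the list's backing array (and any per-frame collection built by the camera) being grown and garbage-collected during the first frames. A hedged sketch of pre-sizing and reusing the buffers follows; GetVisibleTiles is a hypothetical method, not from the question.

      using System.Collections.Generic;

      public interface ICollidable { }
      public class Tile : ICollidable { /* Texture2D, Vector2, ... */ }
      public class Camera
      {
          // Hypothetical: fills an existing buffer instead of returning a new collection each frame.
          public void GetVisibleTiles(List<Tile> buffer) { /* add tiles currently in view */ }
      }

      public class CollisionWorld
      {
          // Capacities are guesses; size them to the worst case so AddRange never regrows mid-frame.
          private readonly List<ICollidable> collidableEntities = new List<ICollidable>(4096);
          private readonly List<Tile> visibleTileBuffer = new List<Tile>(1024);

          public void Rebuild(IEnumerable<ICollidable> players, IEnumerable<ICollidable> enemies,
                              IEnumerable<ICollidable> projectiles, IEnumerable<ICollidable> items,
                              Camera camera)
          {
              collidableEntities.Clear();                 // Clear keeps the already-grown capacity
              collidableEntities.AddRange(players);
              collidableEntities.AddRange(enemies);
              collidableEntities.AddRange(projectiles);
              collidableEntities.AddRange(items);

              visibleTileBuffer.Clear();
              camera.GetVisibleTiles(visibleTileBuffer);  // no new list allocation per frame
              collidableEntities.AddRange(visibleTileBuffer);
          }
      }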

    Read the article

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain driven design got my attention, and while thinking about how this approach could help us I came across the following problem. In DDD the common approach is to retrieve entities (or better, aggregate roots) from a repository which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source and one must actively inform the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement during retrieval. Over time, these measurement values may change and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots, and are disconnected from the data source. Do any of you have an idea on how to solve this following the DDD approach?
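
    One way this is often framed in DDD: the aggregate stays disconnected, and some piece of infrastructure (a polling job, a message handler) feeds fresh measurements into it, at which point the aggregate itself raises the domain event. A minimal sketch, with all names (Sensor, ThresholdExceeded, DomainEvents) invented for illustration:

      using System;

      public class ThresholdExceeded
      {
          public int SensorId { get; set; }
          public double Value { get; set; }
      }

      public static class DomainEvents
      {
          public static event Action<ThresholdExceeded> Raised = delegate { };
          public static void Raise(ThresholdExceeded e) { Raised(e); }
      }

      public class Sensor
      {
          public int Id { get; private set; }
          public double Threshold { get; private set; }
          public double CurrentValue { get; private set; }

          public Sensor(int id, double threshold) { Id = id; Threshold = threshold; }

          // Called by infrastructure (poller, message subscriber) whenever a new reading arrives.
          public void ApplyMeasurement(double value)
          {
              CurrentValue = value;
              if (value > Threshold)
                  DomainEvents.Raise(new ThresholdExceeded { SensorId = Id, Value = value });
          }
      }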

    Read the article

  • How to avoid tons of `instanceof` in collision detection?

    - by Prog
    Consider a simple game with 4 kinds of entities: Robots, Dogs, Missiles, Walls. Here's a simple collision-detection mechanism in pseudocode: (I know, O(n^2). Irrelevant for this question). for(Entity entityA in entities){ for(Entity entityB in entities){ if(collision(entityA, entityB)){ if(entityA instanceof Robot && entityB instanceof Dog) entityB.die(); if(entityA instanceof Robot && entityB instanceof Missile){ entityA.die(); entityB.die(); } if(entityA instanceof Missile && entityB instanceof Wall) entityB.die(); // .. and so on } } } Obviously this is very ugly, and will get bigger and harder to maintain the more entities there are, and the more conditions there are. One option to make this better is to have separate lists for each kind of entity. For example a Robots list, a Dogs list etc., and then check for collisions of all Robots with Dogs, and all Dogs with Walls, etc. This is better, but I still don't think it's good. So my question is: The collision detection system spotted a collision. Now what? What is the common way to react to the collision? Should the system notify the entity itself that it collided with something, and have it decide for itself how to react? E.g. entityA.reactToCollision(entityB). Or is there some other solution?
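
    One common answer is to keep the detection loop completely generic and move the per-pair responses into a lookup consulted when a collision is found. A hedged sketch in C# (the entity classes follow the question; the CollisionRules registry is an invented name for illustration):

      using System;
      using System.Collections.Generic;

      abstract class Entity { public bool Dead; public void Die() { Dead = true; } }
      class Robot : Entity { }
      class Dog : Entity { }
      class Missile : Entity { }
      class Wall : Entity { }

      static class CollisionRules
      {
          // Each rule says what happens when an (A, B) pair collides; adding a new
          // interaction means adding one entry here, not another instanceof branch.
          static readonly Dictionary<(Type, Type), Action<Entity, Entity>> rules =
              new Dictionary<(Type, Type), Action<Entity, Entity>>
              {
                  { (typeof(Robot),   typeof(Dog)),     (a, b) => b.Die() },
                  { (typeof(Robot),   typeof(Missile)), (a, b) => { a.Die(); b.Die(); } },
                  { (typeof(Missile), typeof(Wall)),    (a, b) => b.Die() },
              };

          public static void Resolve(Entity a, Entity b)
          {
              if (rules.TryGetValue((a.GetType(), b.GetType()), out var rule))
                  rule(a, b);
              else if (rules.TryGetValue((b.GetType(), a.GetType()), out var swapped))
                  swapped(b, a);  // same rule, pair given in the opposite order
          }
      }

    The detection loop then just calls CollisionRules.Resolve(entityA, entityB). The alternative the question mentions, entityA.reactToCollision(entityB) with per-type overloads (double dispatch), achieves a similar separation with the rules living on the entities instead of in a central table.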

    Read the article

  • What does it mean when ARP shows <incomplete> on eth1

    - by Geoff Dalgas
    We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two linux instances to provide a failover. Each server has with their own public IP and a single IP which is shared between the two using a virtual interface (eth1:1) at IP: 69.59.196.211 The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the windows servers behind them and we use ip_forwarding to route traffic. We are experiencing an occasional network outage on one of our windows servers behind our linux gateways. HAProxy will detect the server is offline which we can verify by remoting to the failed server and attempting to ping the gateway: Pinging 69.59.196.211 with 32 bytes of data: Reply from 69.59.196.220: Destination host unreachable. Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211): Interface: 69.59.196.220 --- 0xa Internet Address Physical Address Type 69.59.196.161 00-26-88-63-c7-80 dynamic 69.59.196.210 00-15-5d-0a-3e-0e dynamic 69.59.196.212 00-21-5e-4d-45-c9 dynamic 69.59.196.213 00-15-5d-00-b2-0d dynamic 69.59.196.215 00-21-5e-4d-61-1a dynamic 69.59.196.217 00-21-5e-4d-2c-e8 dynamic 69.59.196.219 00-21-5e-4d-38-e5 dynamic 69.59.196.221 00-15-5d-00-b2-0d dynamic 69.59.196.222 00-15-5d-0a-3e-09 dynamic 69.59.196.223 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 225.0.0.1 01-00-5e-00-00-01 static On our linux gateway instances arp -a shows: peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take help resolve this issue? THINGS WE HAVE TRIED I added a static arp entry for testing on one of the linux gateways which still didn't help. root@haproxy2:~# arp -a peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1 root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d root@haproxy2:~# ping 69.59.196.220 PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data. --- 69.59.196.220 ping statistics --- 7 packets transmitted, 0 received, 100% packet loss, time 6006ms Rebooting the windows web server solves this issue temporarily with no other changes to the network but our experience shows this issue will come back. 
Swapping network cards and switches I noticed the link light on the port of the switch for the failed windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable with the same result. I tried changing the properties of the network card in windows and the server locked up and required a hard reset after clicking apply. This windows server has two physical network interfaces so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card. (We also tried another switch we have on hand, no change) Changing network hardware driver versions We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2. Replacing network cables As a last ditch effort we remembered another change that occurred was the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green of lengths 1ft - 3ft for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred. Disable checksum offload, remove TProxy We also tried disabling TCP/IP checksum offload in the driver, no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps.

    Read the article

  • How to implement dynamic binary search for search and insert operations on n elements (C or C++)

    - by iecut
    The idea is to use multiple arrays, each of length 2^k, to store n elements, according to binary representation of n.Each array is sorted and different arrays are not ordered in any way. In the above mentioned data structure, SEARCH is carried out by a sequence of binary search on each array. INSERT is carried out by a sequence of merge of arrays of the same length until an empty array is reached. More Detail: Lets suppose we have a vertical array of length 2^k and to each node of that array there attached horizontal array of length 2^k. That is, to the first node of vertical array, a horizontal array of length 2^0=1 is connected,to the second node of vertical array, a horizontal array of length 2^1= 2 is connected and so on. So the insert is first carried out in the first horizontal array, for the second insert the first array becomes empty and second horizontal array is full with 2 elements, for the third insert 1st and 2nd array horiz. array are filled and so on. I implemented the normal binary search for search and insert as follows: int main() { int a[20]= {0}; int n, i, j, temp; int *beg, *end, *mid, target; printf(" enter the total integers you want to enter (make it less then 20):\n"); scanf("%d", &n); if (n = 20) return 0; printf(" enter the integer array elements:\n" ); for(i = 0; i < n; i++) { scanf("%d", &a[i]); } // sort the loaded array, binary search! for(i = 0; i < n-1; i++) { for(j = 0; j < n-i-1; j++) { if (a[j+1] < a[j]) { temp = a[j]; a[j] = a[j+1]; a[j+1] = temp; } } } printf(" the sorted numbers are:"); for(i = 0; i < n; i++) { printf("%d ", a[i]); } // point to beginning and end of the array beg = &a[0]; end = &a[n]; // use n = one element past the loaded array! // mid should point somewhere in the middle of these addresses mid = beg += n/2; printf("\n enter the number to be searched:"); scanf("%d",&target); // binary search, there is an AND in the middle of while()!!! while((beg <= end) && (*mid != target)) { // is the target in lower or upper half? if (target < *mid) { end = mid - 1; // new end n = n/2; mid = beg += n/2; // new middle } else { beg = mid + 1; // new beginning n = n/2; mid = beg += n/2; // new middle } } // find the target? if (*mid == target) { printf("\n %d found!", target); } else { printf("\n %d not found!", target); } getchar(); // trap enter getchar(); // wait return 0; } Could anyone please suggest how to modify this program or a new program to implement dynamic binary search that works as explained above!!
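
    The structure described above is essentially a list of sorted arrays where level k is either empty or holds exactly 2^k elements, mirroring the binary representation of n. A hedged sketch of both operations follows; it is written in C# to keep it compact, but the structure ports directly to C or C++ with plain arrays.

      using System;
      using System.Collections.Generic;

      class DynamicBinarySearchStructure
      {
          // levels[k] is either null or a sorted array of length 2^k.
          private readonly List<int[]> levels = new List<int[]>();

          public void Insert(int value)
          {
              int[] carry = { value };
              int k = 0;
              // Like adding 1 in binary: merge with every full level until an empty slot appears.
              while (k < levels.Count && levels[k] != null)
              {
                  carry = Merge(levels[k], carry);
                  levels[k] = null;
                  k++;
              }
              if (k == levels.Count) levels.Add(null);
              levels[k] = carry;
          }

          public bool Contains(int value)
          {
              // Search is a sequence of binary searches, one per non-empty level.
              foreach (int[] level in levels)
                  if (level != null && Array.BinarySearch(level, value) >= 0)
                      return true;
              return false;
          }

          private static int[] Merge(int[] a, int[] b)
          {
              var merged = new int[a.Length + b.Length];
              int i = 0, j = 0, m = 0;
              while (i < a.Length && j < b.Length)
                  merged[m++] = a[i] <= b[j] ? a[i++] : b[j++];
              while (i < a.Length) merged[m++] = a[i++];
              while (j < b.Length) merged[m++] = b[j++];
              return merged;
          }
      }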

    Read the article

  • Cisco PIX firewall blocking inbound Exchange email

    - by sumsaricum
    [Cisco PIX, SBS2003] I can telnet server port 25 from inside but not outside, hence all inbound email is blocked. (as an aside, inbox on iPhones do not list/update emails, but calendar works a charm) I'm inexperienced in Cisco PIX and looking for some assistance before mails start bouncing :/ interface ethernet0 auto interface ethernet1 100full nameif ethernet0 outside security0 nameif ethernet1 inside security100 hostname pixfirewall domain-name ciscopix.com fixup protocol dns maximum-length 512 fixup protocol ftp 21 fixup protocol h323 h225 1720 fixup protocol h323 ras 1718-1719 fixup protocol http 80 fixup protocol rsh 514 fixup protocol rtsp 554 fixup protocol sip 5060 fixup protocol sip udp 5060 fixup protocol skinny 2000 no fixup protocol smtp 25 fixup protocol sqlnet 1521 fixup protocol tftp 69 names name 192.168.1.10 SERVER access-list inside_outbound_nat0_acl permit ip 192.168.1.0 255.255.255.0 192.168.1.96 255.255.255.240 access-list outside_cryptomap_dyn_20 permit ip any 192.168.1.96 255.255.255.240 access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq 3389 access-list outside_acl permit tcp any interface outside eq ftp access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq https access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq www access-list outside_acl permit tcp any interface outside eq 993 access-list outside_acl permit tcp any interface outside eq imap4 access-list outside_acl permit tcp any interface outside eq 465 access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq smtp access-list outside_cryptomap_dyn_40 permit ip any 192.168.1.96 255.255.255.240 access-list COMPANYVPN_splitTunnelAcl permit ip 192.168.1.0 255.255.255.0 any access-list COMPANY_splitTunnelAcl permit ip 192.168.1.0 255.255.255.0 any access-list outside_cryptomap_dyn_60 permit ip any 192.168.1.96 255.255.255.240 access-list COMPANY_VPN_splitTunnelAcl permit ip 192.168.1.0 255.255.255.0 any access-list outside_cryptomap_dyn_80 permit ip any 192.168.1.96 255.255.255.240 pager lines 24 icmp permit host 217.157.xxx.xxx outside mtu outside 1500 mtu inside 1500 ip address outside 213.xxx.xxx.xxx 255.255.255.128 ip address inside 192.168.1.1 255.255.255.0 ip audit info action alarm ip audit attack action alarm ip local pool VPN 192.168.1.100-192.168.1.110 pdm location 0.0.0.0 255.255.255.128 outside pdm location 0.0.0.0 255.255.255.0 inside pdm location 217.yyy.yyy.yyy 255.255.255.255 outside pdm location SERVER 255.255.255.255 inside pdm logging informational 100 pdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_outbound_nat0_acl nat (inside) 1 0.0.0.0 0.0.0.0 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx 3389 SERVER 3389 netmask 255.255.255.255 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx smtp SERVER smtp netmask 255.255.255.255 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx https SERVER https netmask 255.255.255.255 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx www SERVER www netmask 255.255.255.255 0 0 static (inside,outside) tcp interface imap4 SERVER imap4 netmask 255.255.255.255 0 0 static (inside,outside) tcp interface 993 SERVER 993 netmask 255.255.255.255 0 0 static (inside,outside) tcp interface 465 SERVER 465 netmask 255.255.255.255 0 0 static (inside,outside) tcp interface ftp SERVER ftp netmask 255.255.255.255 0 0 access-group outside_acl in interface outside route outside 0.0.0.0 0.0.0.0 213.zzz.zzz.zzz timeout xlate 0:05:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 
1:00:00 timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00 timeout sip-disconnect 0:02:00 sip-invite 0:03:00 timeout uauth 0:05:00 absolute aaa-server TACACS+ protocol tacacs+ aaa-server TACACS+ max-failed-attempts 3 aaa-server TACACS+ deadtime 10 aaa-server RADIUS protocol radius aaa-server RADIUS max-failed-attempts 3 aaa-server RADIUS deadtime 10 aaa-server RADIUS (inside) host SERVER *** timeout 10 aaa-server LOCAL protocol local http server enable http 217.yyy.yyy.yyy 255.255.255.255 outside http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server community public no snmp-server enable traps floodguard enable sysopt connection permit-ipsec crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto dynamic-map outside_dyn_map 20 match address outside_cryptomap_dyn_20 crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-MD5 crypto dynamic-map outside_dyn_map 40 match address outside_cryptomap_dyn_40 crypto dynamic-map outside_dyn_map 40 set transform-set ESP-3DES-MD5 crypto dynamic-map outside_dyn_map 60 match address outside_cryptomap_dyn_60 crypto dynamic-map outside_dyn_map 60 set transform-set ESP-3DES-MD5 crypto dynamic-map outside_dyn_map 80 match address outside_cryptomap_dyn_80 crypto dynamic-map outside_dyn_map 80 set transform-set ESP-3DES-MD5 crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map client authentication RADIUS LOCAL crypto map outside_map interface outside isakmp enable outside isakmp policy 20 authentication pre-share isakmp policy 20 encryption 3des isakmp policy 20 hash md5 isakmp policy 20 group 2 isakmp policy 20 lifetime 86400 telnet 217.yyy.yyy.yyy 255.255.255.255 outside telnet 0.0.0.0 0.0.0.0 inside telnet timeout 5 ssh 217.yyy.yyy.yyy 255.255.255.255 outside ssh 0.0.0.0 255.255.255.0 inside ssh timeout 5 management-access inside console timeout 0 dhcpd address 192.168.1.20-192.168.1.40 inside dhcpd dns SERVER 195.184.xxx.xxx dhcpd wins SERVER dhcpd lease 3600 dhcpd ping_timeout 750 dhcpd auto_config outside dhcpd enable inside : end I have Kiwi SysLog running but could use some pointers in that regard to narrow down the torrent of log messages, if that helps?!

    Read the article

  • Is there any better way for creating a dynamic HTML table without using any javascript library like jQuery?

    - by piemesons
    Dont worry we dont need to find out any bug in this code.. Its working perfectly.:-P My boss came to me and said "Hey just tell me whats the best of way of writing code for a dynamic HTML table (add row, delete row, update row).No need to add any CSS. Just javascript. No Jquery library etc. I was confused that in the middle of the project why he asking for some stupid exercise like this. What ever i wrote the following code and mailed him and after 15 mins i got a mail from him. " I was expecting much better code from a guy like you. Anyways good job monkey.(And with a picture of monkey as attachment.) thats was the mail. Line by line. I want to reply him but before that i want to know about the quality of my code. Is this really shitty...!!! Or he was just making fun of mine. I dont think that code is really shitty. Still correct me if you can.Code is working perfectly fine. Just copy paste it in a HTML file. <html> <head> <title> Exercise CSS </title> <script type="text/javascript"> function add_row() { var table = document.getElementById('table'); var rowCount = table.rows.length; var row = table.insertRow(rowCount); var cell1 = row.insertCell(0); var element1 = document.createElement("input"); element1.type = "text"; cell1.appendChild(element1); var cell2 = row.insertCell(1); var element2 = document.createElement("input"); element2.type = "text"; cell2.appendChild(element2); var cell3 = row.insertCell(2); cell3.innerHTML = ' <span onClick="edit(this)">Edit</span>/<span onClick="delete_row(this)">Delete</span>'; cell3.setAttribute("style", "display:none;"); var cell4 = row.insertCell(3); cell4.innerHTML = '<span onClick="save(this)">Save</span>'; } function save(e) { var elTableCells = e.parentNode.parentNode.getElementsByTagName("td"); elTableCells[0].innerHTML=elTableCells[0].firstChild.value; elTableCells[1].innerHTML=elTableCells[1].firstChild.value; elTableCells[2].setAttribute("style", "display:block;"); elTableCells[3].setAttribute("style", "display:none;"); } function edit(e) { var elTableCells = e.parentNode.parentNode.getElementsByTagName("td"); elTableCells[0].innerHTML='<input type="text" value="'+elTableCells[0].innerHTML+'">'; elTableCells[1].innerHTML='<input type="text" value="'+elTableCells[1].innerHTML+'">'; elTableCells[2].setAttribute("style", "display:none;"); elTableCells[3].setAttribute("style", "display:block;"); } function delete_row(e) { e.parentNode.parentNode.parentNode.removeChild(e.parentNode.parentNode); } </script> </head> <body > <div id="display"> <table id='table'> <tr id='id'> <td> Piemesons </td> <td> 23 </td> <td > <span onClick="edit(this)">Edit</span>/<span onClick="delete_row(this)">Delete</span> </td> <td style="display:none;"> <span onClick="save(this)">Save</span> </td> </tr> </table> <input type="button" value="Add new row" onClick="add_row();" /> </div> </body>

    Read the article

  • how do I use html block snippets with dynamic content inside a django template that extends another template?

    - by stackoverflowusername
    Hi. Can someone please help me figure out a way to achieve the following (see snippets below) in Django templates? I know that you cannot use more than one extends, but I am new to django and I do not know the proper syntax for something like this. I want to be able to do this so that I can use my nested div layout for css reasons without having to type it like that each time and risking a typo. In words, I want to be able to have a page template extend my base.html file and then use html snippets of dynamic template content (i.e. template for loops or other template logic devices, not just a context variable I set from my view controller). ------------------------------------------------------------ base.html ------------------------------------------------------------ <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html;charset=utf-8" /> <title>{% block title %}Title{% endblock %}</title> </head> <body> <div class="wrapper"> <div class="header"> This is the common header </div> <div class="nav"> This is the common nav </div> {% if messages %} <div class="messages"> <ul> {% for message in messages %} <li{% if message.tags %} class="{{ message.tags }}"{% endif %}>{{ message }}</li> {% endfor %} </ul> </div> {% endif %} <div class="content"> {% block content %}Page Content{% endblock %} </div> <div class="footer"> This is the common footer </div> </div> </body> </html> ------------------------------------------------------------ columnlayout2.html ------------------------------------------------------------ <div class="twocol container2"> <div class="container1"> <div class="col1"> {% block twocol_col1 %}{% endblock %} </div> <div class="col2"> {% block twocol_col2 %}{% endblock %} </div> </div> </div> ------------------------------------------------------------ columnlayout3.html ------------------------------------------------------------ <div class="threecol container3"> <div class="container2"> <div class="container1"> <div class="col1"> {% block threecol_col1 %}{% endblock %} </div> <div class="col2"> {% block threecol_col2 %}{% endblock %} </div> <div class="col3"> {% block threecol_col3 %}{% endblock %} </div> </div> </div> </div> ------------------------------------------------------------ page.html ------------------------------------------------------------ {% extends "base.html" %} {% block content %} {% extends "columnlayout2.html" %} {% block twocol_col1 %}twocolumn column 1{% endblock %} {% block twocol_col2 %}twocolumn column 2{% endblock %} {% extends "columnlayout3.html" %} {% block threecol_col1 %}threecol column 1{% endblock %} {% block threecol_col2 %}threecol column 2{% endblock %} {% block threecol_col3 %}threecol column 3{% endblock %} {% endblock %} ------------------------------------------------------------ page.html output ------------------------------------------------------------ <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html;charset=utf-8" /> <title>Title</title> </head> <body> <div class="wrapper"> <div class="header"> This is the common header </div> <div class="nav"> This is the common nav </div> <div class="content"> <div class="twocol container2"> <div class="container1"> <div 
class="col1"> twocolumn column 1 </div> <div class="col2"> twocolumn column 2 </div> </div> </div> <div class="threecol container3"> <div class="container2"> <div class="container1"> <div class="col1"> threecol column 1 </div> <div class="col2"> threecol column 2 </div> <div class="col3"> threecol column 3 </div> </div> </div> </div> </div> <div class="footer"> This is the common footer </div> </div> </body> </html>

    Read the article

  • Dynamically creating a Generic Type at Runtime

    - by Rick Strahl
    I learned something new today. Not uncommon, but it's a core .NET runtime feature I simply did not know, although I know I've run into this issue a few times and worked around it in other ways. Today there was no working around it and a few folks on Twitter pointed me in the right direction. The question I ran into is: How do I create a type instance of a generic type when I have dynamically acquired the type at runtime? Yup, it's not something that you do every day, but when you're writing code that parses objects dynamically at runtime it comes up from time to time. In my case it's in the bowels of a custom JSON parser. After some thought triggered by a comment today I realized it would be fairly easy to implement two-way Dictionary parsing for most concrete dictionary types. I could use a custom Dictionary serialization format that serializes as an array of key/value objects. Basically I can use a custom type (that matches the JSON signature) to hold my parsed dictionary data and then add it to the actual dictionary when parsing is complete. Generic Types at Runtime One issue that came up in the process was how to figure out what type the Dictionary<K,V> generic parameters take. Reflection actually makes it fairly easy to figure out generic types at runtime with code like this: if (arrayType.GetInterface("IDictionary") != null) { if (arrayType.IsGenericType) { var keyType = arrayType.GetGenericArguments()[0]; var valueType = arrayType.GetGenericArguments()[1]; … } } The GetArrayType method gets passed a type instance that is the array or array-like object that is rendered in JSON as an array (which includes IList, IDictionary, IDataReader and a few others). In my case the type passed would be something like Dictionary<string, CustomerEntity>. So I know what the parent container class type is. Based on the container type it's then possible to use GetGenericArguments() to retrieve all the generic types in sequential order of definition (i.e. string, CustomerEntity). That's the easy part. Creating a Generic Type and Providing Generic Parameters at Runtime The next problem is how do I get a concrete type instance for the generic type? I know what the type name is and I have a type instance, but it's generic, so how do I get a type reference to keyvalue<K,V> that is specific to the keyType and valueType above? Here are a couple of things that come to mind but that don't work (and yes I tried them unsuccessfully first): Type elementType = typeof(keyvalue<keyType, valueType>); Type elementType = typeof(keyvalue<typeof(keyType), typeof(valueType)>); The problem is that this explicit syntax expects a type literal, not some dynamic runtime value, so both of the above won't even compile. It turns out the way to create a generic type at runtime is using a fancy bit of syntax that until today I was completely unaware of: Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType); The key is the typeof(keyvalue<,>) bit, which looks weird at best. It works however and produces a non-generic type reference. You can see the difference between the full generic type and the non-typed (?) generic type in the debugger: the nonGenericType doesn't show any type specialization, while the elementType type shows the string, CustomerEntity (truncated above) in the type name. Once the full type reference exists (elementType) it's then easy to create an instance.
    In my case the parser parses through the JSON and when it completes parsing the value/object it creates a new keyvalue<T,V> instance. Now that I know the element type that's pretty trivial with: // Objects start out null until we find the opening tag resultObject = Activator.CreateInstance(elementType); Here the result object is picked up by the JSON array parser, which creates an instance of the child object (keyvalue<K,V>) and then parses and assigns values from the JSON document using the type's key/value property signature. Internally the parser then takes each individually parsed item and adds it to a List<keyvalue<K,V>> of items. Parsing through a Generic Type when you only have Runtime Type Information When parsing of the JSON array is done, the List needs to be turned into a de facto Dictionary<K,V>. This should be easy since I know that I'm dealing with an IDictionary, and I know the generic types for the key and value. The problem again, though, is that this needs to happen at runtime, which would mean using several Convert.ChangeType() calls in the code to dynamically cast at runtime. Yuk. In the end I decided the easier and probably only slightly slower way to do this is to use the dynamic type to collect the items and assign them, to avoid all the dynamic casting madness: else if (IsIDictionary) { IDictionary dict = Activator.CreateInstance(arrayType) as IDictionary; foreach (dynamic item in items) { dict.Add(item.key, item.value); } return dict; } This code creates an instance of the generic dictionary type first, then loops through all of my custom keyvalue<K,V> items and assigns them to the actual dictionary. By using dynamic here I can sidestep all the explicit type conversions that would be required in the three highlighted areas (not to mention that this nested method doesn't have access to the dictionary item generic types here). Static <- -> Dynamic Dynamic casting in a static language like C# is a bitch to say the least. This is one of the few times when I've cursed static typing and the arcane syntax that's required to coax types into the right format. It works but it's pretty nasty code. If it weren't for dynamic, that last bit of code would have been pretty ugly as well, with a bunch of Convert.ChangeType() calls to litter the code. Fortunately this type of type convulsion is rather rare and reserved for system level code. It's not every day that you create a string-to-object parser after all :-) © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp
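
    For readers who want to try the technique in isolation, here is a minimal, self-contained sketch (not taken from the parser above) that closes an open generic Dictionary<,> from runtime Type values and adds an entry through the non-generic IDictionary interface:

      using System;
      using System.Collections;
      using System.Collections.Generic;

      class MakeGenericTypeDemo
      {
          static void Main()
          {
              // Pretend these Type values were discovered at runtime, e.g. via GetGenericArguments().
              Type keyType = typeof(string);
              Type valueType = typeof(int);

              // typeof(Dictionary<,>) is the open generic type; MakeGenericType closes it.
              Type dictType = typeof(Dictionary<,>).MakeGenericType(keyType, valueType);

              // Activator builds the concrete Dictionary<string, int>; IDictionary lets us use it
              // without knowing the type parameters at compile time.
              IDictionary dict = (IDictionary)Activator.CreateInstance(dictType);
              dict.Add("answer", 42);

              Console.WriteLine(dict.GetType());   // Dictionary`2[System.String,System.Int32]
          }
      }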

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found.  After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context.  We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes.  Modes describe *how* people go about realizing insights.  (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity.  When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities, to which specific types of information visualizations better enable these scenarios for the types of data users will analyze. The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficult of communicating using only verbs.  So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics.  Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into.  We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes. Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sense making needs and activity in more specific, consistent, and complete fashion.  In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals.  For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject] clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides the perspective divide.  For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.  
The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can innovate, by expanding our product portfolio to meet unmet needs?”.  Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis.  From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’?  And how do these compare to each other?”.  Translation also happens from the perspective of analysis to the perspective of data; in terms of availability, quality, completeness, format, volume, etc. By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects.  This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one.  (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain.  As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject.  Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange).  This axis is shown explicitly above the Spectrum.  Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity.  This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity. Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets.  For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.  
Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.   The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible.  Accordingly, business analytics as a domain is structured around the fundamental assumption that sense making depends on analytical transformation of data.  Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects.  The end goal of  ‘laddering’ successive transformations is to enable sense making from the business perspective, rather than the analytical perspective.Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business.  More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders.  These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer.  The fulfillment process will involve other types of Entities, such as the products or services the business provides.  The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included.  All of this may be interpreted through an understanding of the operational domain of the businesses supply chain (a Domain).   We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • Run time insert using bulk update, giving an internal error?

    - by Vineet
    Hi, I am trying to create a run-time table named dynamic and insert data into it from an index-by table using a bulk update, but when I try to execute it I get this error: ERROR at line 1: ORA-06550: line 0, column 0: PLS-00801: internal error [74301 ] declare type index_tbl_type IS table of number index by binary_integer; num_tbl index_tbl_type; TYPE ref_cur IS REF CURSOR; cur_emp ref_cur; begin execute immediate 'create table dynamic (v_num number)';--Creating a run time table FOR i in 1..10000 LOOP execute immediate 'insert into dynamic values('||i||')';--run time insert END LOOP; OPEN cur_emp FOR 'select * from dynamic';--opening ref cursor FETCH cur_emp bulk collect into num_tbl;--bulk inserting in index by table close cur_emp; FORALL i in num_tbl.FIRST..num_tbl.LAST --Bulk update execute immediate 'insert into dynamic values('||num_tbl(i)||')'; end;

    Read the article

  • Why learn Perl, Python, Ruby if the company is using C++, C# or Java as the application language?

    - by szabgab
    I wonder why a C++, C#, or Java developer would want to learn a dynamic language? Assuming the company won't switch its main development language from C++/C#/Java to a dynamic one, what use is there for a dynamic language? What helper tasks can be done with a dynamic language faster or better, after only a few days of learning, than with the static language you have been using for several years? Update After seeing the first few responses it is clear that there are two issues. My main interest would be something that is justifiable to the employer as an expense. That is, I am looking for justifications for the employer to finance the learning of a dynamic language. Aside from the obvious point that the employee will have a broader view, employers are usually looking for some "real" benefit.

    Read the article

  • Union,Except and Intersect operator in Linq

    - by Jalpesh P. Vadgama
    While developing a Windows service using LINQ to SQL I needed something that would intersect two lists and return the result as a list. After searching the net I found three very useful operators in LINQ: Union, Except and Intersect. Here is an explanation of each operator. Union Operator: the Union operator combines the elements of both entities and returns the result as a third, new set of entities. Except Operator: the Except operator removes from the first set of entities those elements that are also present in the second, and returns the result as a third, new set of entities. Intersect Operator: as the name suggests, it returns the common elements of both entities as a new set of entities. Let's take a simple console application as an example, where I have used two string arrays, applied the three operators one by one, and printed the results using Console.WriteLine. Here is the code for that. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { string[] a = { "a", "b", "c", "d" }; string[] b = { "d", "e", "f", "g" }; var UnResult = a.Union(b); Console.WriteLine("Union Result"); foreach (string s in UnResult) { Console.WriteLine(s); } var ExResult = a.Except(b); Console.WriteLine("Except Result"); foreach (string s in ExResult) { Console.WriteLine(s); } var InResult = a.Intersect(b); Console.WriteLine("Intersect Result"); foreach (string s in InResult) { Console.WriteLine(s); } Console.ReadLine(); } } } Here is the output of the console application, as expected. Hope this will help you. Technorati Tags: Linq,Except,InterSect,Union,C#
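
    A small follow-up sketch, since the post is motivated by entity lists rather than strings: on entity objects these operators compare by reference unless you pass an IEqualityComparer<T>, so a comparer keyed on an identifier is usually needed. The Customer class and comparer below are illustrative, not from the post.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class Customer { public int Id; public string Name; }

      class CustomerIdComparer : IEqualityComparer<Customer>
      {
          public bool Equals(Customer x, Customer y) { return x.Id == y.Id; }
          public int GetHashCode(Customer c) { return c.Id.GetHashCode(); }
      }

      class Demo
      {
          static void Main()
          {
              var listA = new List<Customer> { new Customer { Id = 1, Name = "Asha" },
                                               new Customer { Id = 2, Name = "Ben" } };
              var listB = new List<Customer> { new Customer { Id = 2, Name = "Ben" },
                                               new Customer { Id = 3, Name = "Cory" } };

              // Without the comparer this would be empty, because the Id 2 customers
              // are different object instances.
              var common = listA.Intersect(listB, new CustomerIdComparer());
              foreach (var c in common) Console.WriteLine(c.Id);   // prints 2
          }
      }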

    Read the article

  • How a "Collision System" should be implemented?

    - by nathan
    My game is written using an entity system approach using the Artemis Framework. Right now my collision detection is called from the Movement System, but I'm wondering if that's a proper way to do collision detection using such an approach. Right now I'm thinking of a new system dedicated to collision detection that would process all the solid entities to check if they are in collision with another one. I'm wondering if that's a correct way to handle collision detection with an entity system approach? Also, how should I implement this collision system? I thought of an IntervalEntitySystem that would check every 200ms (this value was chosen based on the Artemis documentation) whether some entities are colliding. protected void processEntities(ImmutableBag<Entity> ib) { for (int i = 0; i < ib.size(); i++) { Entity e = ib.get(i); // check for collision with other entities here } }
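
    A dedicated collision system is a common choice. The sketch below (written in C#; the Artemis Java equivalent is analogous) shows the usual shape: iterate each unique pair once, test their bounds, and hand the pair off to whatever response mechanism the game uses. All type and member names here are illustrative.

      using System.Collections.Generic;

      public struct Box
      {
          public float X, Y, Width, Height;
          public bool Intersects(Box o)
          {
              return X < o.X + o.Width && o.X < X + Width &&
                     Y < o.Y + o.Height && o.Y < Y + Height;
          }
      }

      public class Entity
      {
          public Box Bounds;
          public virtual void Collided(Entity other) { }
      }

      public class CollisionSystem
      {
          private readonly List<Entity> solids = new List<Entity>();  // entities tagged as solid

          public void Process()
          {
              for (int i = 0; i < solids.Count; i++)
              {
                  for (int j = i + 1; j < solids.Count; j++)   // i < j: each pair tested once
                  {
                      Entity a = solids[i], b = solids[j];
                      if (a.Bounds.Intersects(b.Bounds))
                      {
                          a.Collided(b);                        // or publish a collision event instead
                          b.Collided(a);
                      }
                  }
              }
          }
      }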

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter  needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know. Overall Impact Generally speaking, Twitter API v1.1 is semantically very much the same as it’s predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn’t change much at all.  The following sections describe what did  change. Authentication In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that’s all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter.  You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support. The New Search One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results.  Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn’t change. Different Rate Limits The issue of rate limits itself is contentious, but this discussion is focused on the coding experience and I’ll leave the politics to those who prefer to engage in that activity. What’s important here is that both headers and resources have changed. You should review Twitter’s Rate Limit documentation to understand what the changes mean.  A quick explanation is that rate limits are applied individually to each resource in 15 minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for.  The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit… which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names.  I haven’t done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up. Error Handling Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There’s also a new ErrorCode that’s populated with the message error code. 
Parameters Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets, that provide start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status.  If you were always setting IncludeEntities to true, then you won’t see a change. However, be aware that you’ll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not  matter to you  depending on the requirements of your application, but you should be aware of it. Everything Else There might be small changes here and there that I haven’t mentioned, but these were the ones you should be most aware of.  Streams didn’t change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you’ll be seeing me make that change some time in the future.  Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly. Summary The big changes to Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You’ll need to change your code to read Search results differently, but the query is much the same as you use now. There’s a new RateLimits API, one of the Help queries.  Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect  most others to be small or affect a smaller percentage of developers.  You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.   @JoeMayo

    Read the article

  • Rendering order in an Entity System

    - by Daedalus
    Say I use a basic ES approach, and inside each System I hold a list of all the entities that System is required to process. How do I maintain this list of entities in the desired rendering order, i.e. for a dumb 2D RenderingSystem? I saw this discussion, and what they suggest is to do something like Z ordering - what I would probably do is just store a "layer" int in DrawableComponent and then, inside RenderingSystem, sort the entities by that "layer" whenever the entity list for RenderingSystem changes. They also say we could just delete and recreate the entity whenever we want it on top, but that seems too inflexible to me. How is this problem usually solved?
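
    The layer-plus-lazy-sort idea described above is a workable baseline. A minimal sketch of it in C# (names are illustrative): keep a dirty flag, re-sort only when membership or a layer value changes, and use a stable sort so same-layer entities keep their insertion order.

      using System.Collections.Generic;
      using System.Linq;

      public class DrawableComponent { public int Layer; /* sprite, position, ... */ }

      public class RenderingSystem
      {
          private List<DrawableComponent> drawables = new List<DrawableComponent>();
          private bool orderDirty;

          public void Add(DrawableComponent d)    { drawables.Add(d); orderDirty = true; }
          public void Remove(DrawableComponent d) { drawables.Remove(d); orderDirty = true; }
          public void OnLayerChanged()            { orderDirty = true; }

          public void Draw()
          {
              if (orderDirty)
              {
                  // OrderBy is a stable sort, so entities on the same layer keep their relative order.
                  drawables = drawables.OrderBy(d => d.Layer).ToList();
                  orderDirty = false;
              }
              foreach (var d in drawables)
              {
                  // draw back-to-front: lower layers first
              }
          }
      }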

    Read the article

  • Point me to info about constructing filters (of lists)

    - by jah
    I would like some pointers to information which would help me understand how to go about providing the ability to filter a list of entities by their attributes as well as by attributes of related entities. As an example, imagine a web app which provides order management of some kind. Orders and related entities are stored in a relational database. And imagine that the app has an interface which lists the orders. The problem is: how does one allow the list to be filtered by, for example: order number (an attribute); line item name (an attribute of an n-n related entity); or some text in an administrative note related to the order (text found in an attribute of a 1-1 related entity)? I'm trying to discover whether there is something like a standard, efficient way to construct the queries and the filtering form; or some possible strategies; or any theory on the topic; or some example code. My google-fu fails me.
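
    One widely used strategy, sketched below in C# against an Entity Framework-style IQueryable (all class and property names are invented for illustration): compose the query from optional predicates, so each filter field the user fills in adds one Where clause and empty fields add nothing, and related entities are reached through navigation properties.

      using System.Collections.Generic;
      using System.Linq;

      public class Order
      {
          public string Number { get; set; }
          public Note AdministrativeNote { get; set; }
          public List<LineItem> LineItems { get; set; }
      }
      public class LineItem { public string Name { get; set; } }
      public class Note { public string Text { get; set; } }

      public class OrderFilter
      {
          public string OrderNumber { get; set; }
          public string LineItemName { get; set; }
          public string NoteText { get; set; }
      }

      public static class OrderQueries
      {
          public static IQueryable<Order> Apply(IQueryable<Order> orders, OrderFilter f)
          {
              if (!string.IsNullOrEmpty(f.OrderNumber))
                  orders = orders.Where(o => o.Number == f.OrderNumber);

              if (!string.IsNullOrEmpty(f.LineItemName))       // attribute of an n-n related entity
                  orders = orders.Where(o => o.LineItems.Any(li => li.Name.Contains(f.LineItemName)));

              if (!string.IsNullOrEmpty(f.NoteText))           // attribute of a 1-1 related entity
                  orders = orders.Where(o => o.AdministrativeNote.Text.Contains(f.NoteText));

              return orders;   // with a LINQ-to-database provider this runs as a single SQL query
          }
      }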

    Read the article

  • What would one call this architecture?

    - by Chris
    I have developed a distributed test automation system which consists of two different entities. One entity is responsible for triggering test runs and monitoring/displaying their progress. The other is responsible for carrying out tests on that host. Both of these entities retrieve data from a central DB. Now, my first thought is that this is clearly a server-client architecture. After all, you have exactly one organizing entity and many entities that communicate with said entity. However, while the supposed clients do communicate with the server via RPC, they are not actually requesting services or information; rather, they are simply reporting back test progress. In fact, once the test run has been triggered they can complete their tasks without a connection to the server. The request for a service is actually made by the supposed server, which triggers the clients to carry out tests. So would this still be considered a server-client architecture, or is this something different?

    Read the article

  • Playing a sound on collision?

    - by Eric McLoughlin
    I'm making a pool game. I've separated the gameplay logic and the physics into two systems, entities and physics. Each entity holds a reference to a body which the physics system uses; the body itself holds a reference back to its owner. When an entity collides with another entity, the Collided(Entity other) method is called on both. What I'm trying to do now is play a sound when both colliding entities are of a certain subclass. I'm not sure how to do that. I could do it in the Collided method, but then the sound would be played twice at the same time, since the method is called on both entities. How do you suggest I do this?
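
    One simple fix, sketched below in C# (Entity, Ball, Id, and PlaySound are illustrative names): break the tie so that exactly one of the two Collided calls owns the sound, for example the entity with the lower id.

        using System;

        public abstract class Entity
        {
            public int Id { get; set; }

            public virtual void Collided(Entity other) { }

            protected static void PlaySound(string name)
            {
                Console.WriteLine("play " + name);   // stand-in for the real audio call
            }
        }

        public class Ball : Entity
        {
            public override void Collided(Entity other)
            {
                // both balls receive the callback, but only the one with the lower id reacts,
                // so the clack plays exactly once per collision
                if (other is Ball && Id < other.Id)
                    PlaySound("ball_clack");
            }
        }

    An alternative is to have the physics system raise one event per collision pair and let a dedicated audio/effects system listen for it, which keeps the sound logic out of the entities entirely.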

    Read the article

  • PHP extensions won't load on Apache startup

    - by WebDevHobo
    I've added php as a module for Apache 2.2.11:

        LoadModule php5_module "c:/php/php5apache2_2.dll"

    And also added:

        AddType application/x-httpd-php .php

    And in PHP.ini, my extension dir is set to:

        extension_dir = "C:\php\ext"

    And yes, the directories are correct and all files do exist. But when I start Apache, I get these errors:

        PHP Warning: PHP Startup: Unable to load dynamic library 'C:\php\ext\php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library 'C:\php\ext\php_pdo_pgsql.dll' - The specified module could not be found.\r\n in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library 'C:\php\ext\php_pgsql.dll' - The specified module could not be found.\r\n in Unknown on line 0
        [Sun May 17 03:46:01 2009] [notice] Apache/2.2.11 (Win32) PHP/5.2.9-2 configured -- resuming normal operations
        [Sun May 17 03:46:01 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Sun May 17 03:46:01 2009] [notice] Parent: Created child process 4652
        PHP Warning: PHP Startup: Unable to load dynamic library 'C:\php\ext\php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library 'C:\php\ext\php_pdo_pgsql.dll' - The specified module could not be found.\r\n in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library 'C:\php\ext\php_pgsql.dll' - The specified module could not be found.\r\n in Unknown on line 0
        [Sun May 17 03:46:01 2009] [notice] Child 4652: Child process is running
        [Sun May 17 03:46:01 2009] [notice] Child 4652: Acquired the start mutex.
        [Sun May 17 03:46:01 2009] [notice] Child 4652: Starting 64 worker threads.
        [Sun May 17 03:46:01 2009] [notice] Child 4652: Starting thread to listen on port 80.

    So I'm probably forgetting something simple here - can someone tell me what I'm forgetting?

    Read the article

  • Grouping Categorized Data In WPF.

    - by VoidDweller
    Here is what I am trying to do.

    Dynamic Category:
      - Columns can be 0 or more.
      - Must contain 1 or more Type Columns.
      - Will only be displayed if any row contains Type Column data associated with it.

    Data Rows:
      - Will be added asynchronously.
      - Will be grouped by a Common Category column.
      - Will add a Dynamic Category if it does not yet exist.
      - Will add a Type Column if it does not yet exist within its appropriate Dynamic Category.

    Platform info: WPF, .NET 3.5 SP1, C#, MVVM

    I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? Envision this nicely styled. :-)

        --------------------------------------------------------------------------
        |[ Common Category ]   |[ Dynamic Category 0 ] |[ Dynamic Category N ]    |
        --------------------------------------------------------------------------
        |[Header 1]|[Header 2] |[ Type 0 ] |[ Type N ] |[ Type 0 ] |[ Type N ]    |
        --------------------------------------------------------------------------
        |[Data 2 Group]                                                           |
        --------------------------------------------------------------------------
        | Data A   | Data 2   || Null      | Data 1   || Data 0    | Data 1      ||
        | Data B   | Data 2   || Data 0    | Null     || Data 0    | Data 1      ||
        --------------------------------------------------------------------------
        |[Data 1 Group]                                                           |
        --------------------------------------------------------------------------
        | Data C   | Data 1   || Null      | Data 1   || Data 0    | Data 1      ||
        | Data D   | Data 1   || Null      | Null     || Data 0    | Null        ||
        --------------------------------------------------------------------------

    Edit: Sorting and paging is not necessary. I have looked at nested ListViews and DataGrids, and at dynamically building a Grid. Dynamically building a Grid and leveraging the SharedSizeGroup property seems the most promising strategy, but I am concerned about performance. Would a better approach be to consider this a dynamic report? If so, what should I be looking at? Thanks for your help.
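
    If the dynamic-Grid route is taken, the usual trick is one SharedSizeGroup per dynamic Type column so that independently built row grids still line up. A minimal C# sketch (AddTypeColumn and the group-name scheme are illustrative, not from the asker's code):

        using System.Windows;
        using System.Windows.Controls;

        public static class DynamicColumns
        {
            // Call this on each row's Grid when a new Type column appears.
            // The rows' common ancestor must set Grid.IsSharedSizeScope="True"
            // so columns with the same SharedSizeGroup name share one width.
            public static void AddTypeColumn(Grid rowGrid, string category, string type)
            {
                rowGrid.ColumnDefinitions.Add(new ColumnDefinition
                {
                    Width = GridLength.Auto,
                    SharedSizeGroup = category + "_" + type
                });
            }
        }

    SharedSizeGroup does add measuring overhead, so it is worth profiling with a realistic number of rows before committing to it.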

    Read the article

  • Entity Framework - Single EDMX Mapping Multiple Databases

    - by michaelalisonalviar
    Because of my recent craze on Entity Framework, thanks to Sir Humprey, I have continuously searched the Internet for tutorials on how to apply it to our current system. I've come to learn that with EF I can eliminate the numerous methods/functions for CRUD operations, my overly used connection strings, Data Adapters and Data Readers, because Entity Framework will map my desired database, do its magic to create entities for each table I want (using the EF Power Tools), and provide all the methods/functions for my CRUD operations.

    But as I began applying it to a new project I was assigned to, I realized our current server is designed to keep similar entities in different databases. For example, our lookup tables are stored in LookupDB, accounting-related tables are in AccountingDB, and sales-related tables are in SalesDB. My dilemma is that I have to use an existing table from LookupDB as a look-up for my new table. Then I found Miss Rachel's blog (here) - thank you, Miss Rachel! - which enables me to let EF think that my TableLookup1 is in AccountingDB, using the following steps. I'm on VS 2010, using C#, Entity Framework 5, and SQL Server 2008 as our DB server.

    Step 1: Create a SQL synonym. If you want a more detailed discussion on synonyms, this is what I read (link here). Simply put, a synonym enables me to simplify my query for the look-up table when I'm using AccountingDB, from

        SELECT [columns] FROM LookupDB.dbo.TableLookup1

    to

        SELECT [columns] FROM TableLookup1

    Syntax: CREATE SYNONYM TableLookup1 (1) FOR LookupDB.dbo.TableLookup1 (2)

      1. What you want to call the table on your other DB.
      2. DatabaseName.schema.TableName.

    Step 2: Now follow Miss Rachel's steps. You can either visit the link on the original topic I posted earlier or just follow the steps I made:

      1. I created a Visual Basic solution that will contain the 4 projects needed to complete the merging.
      2. The first project will contain the edmx file pointing to AccountingDB.
      3. The second project will contain the edmx file pointing to LookupDB.
      4. The third project will be our repository for the merged edmx file. Create an edmx file pointing to AccountingDB, as this is the database that we created the synonym on. Reminder: aside from using the same names for the entities, please make sure that you have the same Model Namespace for all your entities.
      5. The fourth project will contain the beautiful EDMX merger that Miss Rachel created, which will free you from hard-coding the merge/recoding the edmx file of the third project every time a change is made to either of the first two projects' edmx files.
      6. Run the solution, but make sure that in the solution properties a single startup project is selected and the project containing the EDMX merger is chosen.
      7. After running the solution, double-click on the EDMX file of the third project and set Lazy Loading Enabled = False. This will let you use the tables/entities that you see in that EDMX file.
      8. Feel free to do your CRUD operations.

    I don't know if EF 5 already has a feature to support synonyms, as I am still a newbie in that aspect, but I have seen a link with suggestions for Entity Framework upgrades, and one of them is "Support for multiple databases". So that's it! Thanks for reading!
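
    Once the synonym and the merged EDMX are in place, the look-up table can be queried through the accounting-side context as if it lived in AccountingDB. A minimal sketch with hypothetical names (AccountingEntities being the generated context, Description a column on TableLookup1):

        using System.Linq;

        class LookupDemo
        {
            static void Main()
            {
                using (var context = new AccountingEntities())
                {
                    var lookups = (from l in context.TableLookup1
                                   orderby l.Description
                                   select l).ToList();

                    // join these against the accounting tables, bind them to dropdowns, etc.
                }
            }
        }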

    Read the article
