Search Results

Search found 16914 results on 677 pages for 'single threaded'.

Page 595 of 677

  • ResourceSerializable: an alternate to ORM and ActiveRecord

    - by Levi Morrison
    A few opinionated reasons I don't like the traditional ORM and ActiveRecord patterns:

    They work only with a database. Sometimes I'm dealing with objects from an API and other objects from a database. All the implementations I have seen don't allow for that. Feel free to clue me in if I'm wrong on this.

    They are brittle. Changes in the database will likely break your implementation. Some implementations can help reduce this, but a few of the ones I've seen don't.

    Their very design is influenced by the database. If I want to switch to using an API, I'll (likely) have to redesign the object to get it to work.

    It seems to violate the single-responsibility principle. They know what they are and how they act, but they also know how they are created, destroyed and saved? Seems a bit much.

    What about an approach that is somewhat more familiar in PHP: implementing an interface? In PHP 5.4, we'll have the JsonSerializable interface that defines the data to be json_encoded, so users will become accustomed to this type of thing. What if there was a ResourceSerializable interface? This is still an ORM by name, but certainly not by tradition.

        interface ResourceSerializable {
            /**
             * Returns the id that identifies the resource.
             */
            function resourceId();

            /**
             * Returns the 'type' of the resource.
             */
            function resourceType();

            /**
             * Returns the data to be serialized.
             */
            function resourceSerialize();
        }

    Things might be poorly named; I'll take suggestions.

    Notes: resourceId will work for APIs and databases. As long as your primary key in the database is the same as the resource ID in the API, there is no conflict. All of the APIs I've worked with have a unique ID for the resource, so I don't see any issues there. resourceType is the group or type associated with the resource. You can use this to map the resource to an API call or a database table. If the resourceType was person, it could map to /api/1/person/{resourceId} and the table persons (or people, if it's smart enough). resourceSerialize() returns the data to be stored. Keys would identify API parameters and database table columns.

    This also seems easier to test than ActiveRecord/ORM implementations. I haven't done much automated testing on traditional ActiveRecord/ORM implementations, so this is merely a guess. But it seems that being able to create objects independently of the library helps me. I don't have to use load() to get an existing resource; I can simply create one and set all the right properties. This is not so easy in the ActiveRecord/ORM implementations I've dealt with.

    Downsides: You need another object to serialize it. This also means you have more code in general, as you have to use more objects. You have to map resource types to API calls and database tables. This is even more work, but some ORMs and ActiveRecord implementations require you to map objects to table names anyway.

    Are there other downsides that you see? Does this seem feasible to you? How would you improve it?

    Note: I almost asked this on StackOverflow because it might be too vague for their standards, but I'm still not really familiar with programmers.stackexchange.com, so please help me improve my question if it doesn't shape up to standards here.
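    To make the idea concrete, here is a minimal Python sketch of how a mapper might consume objects exposing the three proposed methods and route them to either an API or a database table. The Person class, URL pattern, and save_to_api/save_to_table helpers are hypothetical illustrations, not part of the original proposal.

        # Hypothetical sketch: persisting a "ResourceSerializable"-style object
        # to either an API endpoint or a database table, keyed off resource_type().

        class Person:
            def __init__(self, resource_id, name, email):
                self._id = resource_id
                self.name = name
                self.email = email

            def resource_id(self):
                return self._id

            def resource_type(self):
                return "person"

            def resource_serialize(self):
                # Keys double as API parameters and table columns.
                return {"name": self.name, "email": self.email}


        def save_to_api(resource, base_url="https://example.com/api/1"):
            # e.g. https://example.com/api/1/person/42 plus the serialized body
            url = "%s/%s/%s" % (base_url, resource.resource_type(), resource.resource_id())
            return url, resource.resource_serialize()


        def save_to_table(resource, cursor):
            # 'cursor' is any DB-API cursor; table name is derived from the type.
            data = resource.resource_serialize()
            columns = ", ".join(data)
            placeholders = ", ".join(["%s"] * len(data))
            sql = "INSERT INTO %ss (%s) VALUES (%s)" % (
                resource.resource_type(), columns, placeholders)
            cursor.execute(sql, list(data.values()))

    The point of the sketch is only that the object itself stays ignorant of how it is stored; the mapping from resourceType() to a URL or table name lives entirely outside it.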

    Read the article

  • Using Oracle Linux iSCSI targets with Oracle VM

    - by wim.coekaerts
    A few days ago I had written a blog entry on how to use Oracle Solaris 10 (in my case), ZFS and the iSCSI target feature in Oracle Solaris to create a set of devices exported to my Oracle VM server. Oracle Linux can do this as well, and I wanted to make sure I also tried out how to do this on Oracle Linux. Here are the results.

    When you install Oracle Linux 5 update 5 (anything newer than update 3), it comes with an rpm called scsi-target-utils. To begin your quest, should you choose to accept it :) make sure this is installed:

        rpm -qa | grep scsi-target

    If it is not installed:

        up2date scsi-target-utils

    The target utils come with a tool, tgtadm, which is similar to iscsitadm on Oracle Solaris. There are two components again on the iSCSI server side: (1) create volumes - we will use LVM with lvcreate, and (2) expose a target using tgtadm.

    My server has a simple setup. All the disks are part of a single volume group called vgroot. To export a 50GB volume I just create a new volume:

        lvcreate -L 50G -n mytest1 vgroot

    This will show up as a new volume in /dev/mapper as /dev/mapper/vgroot-mytest1. Create as many as you want for your environment. Since I already have my blog entry about the 5 volumes, I am not going to repeat the whole thing. You can just go look at the previous blog entry.

    Now that we have created the volume, we need to use tgtadm to set it up. Make sure the service is running:

        /etc/init.d/tgtd start

    or

        service tgtd start

    (If you want to keep it running you can do chkconfig tgtd on to start it automatically at boot time.)

    Next you need a target name to set everything up. My recommendation would be to install iscsi-initiator-utils. This will create an iSCSI id and put it in /etc/iscsi/initiatorname.iscsi. For convenience you can do:

        source /etc/iscsi/initiatorname.iscsi
        echo $InitiatorName

    and from here on use $InitiatorName instead of the long complex iqn.

    Create your target:

        tgtadm --lld iscsi --op new --mode target --tid 1 -T $InitiatorName

    To show the status:

        tgtadm --lld iscsi --op show --mode target

    Add the volume previously created:

        tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/vgroot-mytest1

    Re-run status to see it's there:

        tgtadm --lld iscsi --op show --mode target

    And just like on Oracle Solaris you now have to export (bind) it:

        tgtadm --lld iscsi --op bind --mode target --tid 1 -I iqn.1986-03.com.sun:01:2a7526f0ffff

    If you want to export the LUN to every iSCSI initiator then replace the iqn with ALL. Of course you have to add the iqn of each iSCSI initiator or client you want to connect. In the case of my 2 node Oracle VM server setup, both Oracle VM servers' initiator names would have to be added. Use status again to see that it has this iqn under ACL:

        tgtadm --lld iscsi --op show --mode target

    You can drop the --lld iscsi if you want, or alias it. It just makes the command line more obvious as to what you are doing.

    Oracle VM side: refer back to the previous blog entry for the detailed setup of my Oracle VM server volumes, but the exact same commands will be used there.

        # discover
        iscsiadm --mode discovery --type sendtargets --portal
        # login
        iscsiadm --mode node --targetname <iscsi targetname> --portal --login
        # get devices
        /etc/init.d/iscsi restart

    And voila, you should be in business. Have fun.
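    For repeated setups, the same sequence can be scripted. Below is a rough Python sketch that simply shells out to the lvcreate and tgtadm commands shown above; the function name, parameter names, and the idea of driving this from Python are my own additions, not part of the original walkthrough.

        # Hypothetical helper: create an LVM volume and expose it via tgtadm,
        # using the exact commands from the post. Run as root on the target server.
        import subprocess

        def run(cmd):
            print("+ " + " ".join(cmd))
            subprocess.check_call(cmd)

        def export_volume(vg, name, size, tid, lun, target_iqn, client_iqn):
            # 1. Create the backing volume (shows up as /dev/mapper/<vg>-<name>).
            run(["lvcreate", "-L", size, "-n", name, vg])
            device = "/dev/mapper/%s-%s" % (vg, name)

            # 2. Create the target and attach the volume as a LUN.
            run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "target",
                 "--tid", str(tid), "-T", target_iqn])
            run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "logicalunit",
                 "--tid", str(tid), "--lun", str(lun), "-b", device])

            # 3. Bind (export) it to the client initiator, or pass "ALL".
            run(["tgtadm", "--lld", "iscsi", "--op", "bind", "--mode", "target",
                 "--tid", str(tid), "-I", client_iqn])

        # Example (values are placeholders):
        # export_volume("vgroot", "mytest1", "50G", 1, 1,
        #               "<target iqn>", "iqn.1986-03.com.sun:01:2a7526f0ffff")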

    Read the article

  • #altnetseattle &ndash; REST Services

    - by GeekAgilistMercenary
    Below are the notes I made in the REST Architecture Session I helped kick off with Andrew.

    - RSS, ATOM, and such are needed for better discovery, i.e. there still is a need for some type of discovery.
    - What is difficult is modeling behaviors in a RESTful way - invoking some type of state change against an object. For instance, in the case of a POST vs. a GET: the GET is easy, comes back as is, but what about a POST, which often changes some state or something?
    - The challenge is doing multiple workflows with stateful workflows. How does batch work? Maybe model the batch as a resource (a sketch of this idea follows these notes).
    - Frameworks aren't particularly part of REST; REST is REST. But the point was argued that REST is modeled, or part of modeling, a state machine of some sort...?
    - Nothing is 100% reliable with REST - comparisons drawn with TCP/IP. Sufficient probability is achieved for the communications, but the idea of a possible failure has to be built into the usage model of REST.
    - Ruby on Rails / RESTfully, and others used. What were their issues, what do they do? ATOM feeds, objects serialized, using LINQ to XML with this. No state machine libraries.
    - Idempotency concerns around REST and single-change POSTs are inherent in the architecture.
    - REST - one of the constrained languages is for the interaction with the system, limiting what can be done on the resources. Disagreement: there is no agreed-upon set of REST verbs.
    - Sam Ruby - RESTful services. Expanding the verbs within REST/HTTP pushes you off the web. Of the existing verbs, POST leaves the most up for debate.
    - Robert Reem used a Factory to deal with the POST to handle the new state, the POST identifying what it just did by the return. Different states are put into POST, so that new prospective verbs can be used to advantage without creating new verbs for REST/HTTP and without breaking universal clients.
    - The biggest issue with REST services is their lack of state, yet it is also one of their biggest strengths. What happens is that the client takes up the often onerous task of handling all state, state machines, and other extraneous resource management. All the GETs, POSTs, DELETEs, INSERTs all get pushed into abstraction. My 2 cents is that this in a way ends up pushing a huge proprietary burden onto the REST services, often removing the point of REST, which is to be simple and to the point.
    - WADL does provide discovery and some state control (sort of?). Statement made: "WADL" isn't needed; the JSON, XML, or other client-side returned data handles this.

    I then applied the law of 2 feet rule for myself and headed off to finish up these notes, post to the Wiki, and figure out what I was going to do next. For the original Wiki entry check it out here. I will be adding more to this post with a subsequent post. Please do feel free to post your thoughts and ideas about this, as I am sure everyone in the session will have more for elaboration.
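    As a concrete (and entirely hypothetical) illustration of the "model the batch as a resource" note above, here is a small Python/Flask sketch. Nothing in it comes from the session; the endpoint names and in-memory store are made up purely to show the shape of the idea: the batch job itself gets a URI, so its state can be polled with plain GETs instead of inventing new verbs.

        # Sketch: a batch operation modeled as a resource.
        # POST creates the batch and returns its URI; GET reports its state.
        import uuid
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        batches = {}  # in-memory stand-in for real storage

        @app.route("/batches", methods=["POST"])
        def create_batch():
            batch_id = str(uuid.uuid4())
            batches[batch_id] = {"items": request.get_json() or [], "status": "queued"}
            response = jsonify({"id": batch_id, "status": "queued"})
            response.status_code = 201
            response.headers["Location"] = "/batches/" + batch_id
            return response

        @app.route("/batches/<batch_id>", methods=["GET"])
        def get_batch(batch_id):
            batch = batches.get(batch_id)
            if batch is None:
                return jsonify({"error": "not found"}), 404
            return jsonify({"id": batch_id, "status": batch["status"]})

        if __name__ == "__main__":
            app.run()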

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms: root domains. Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots. In the LDoms terminology, this feature is called "Direct IO" or DIO. It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain. Let's look again at the hardware available to mars in the original configuration:

        root@sun:~# ldm ls-io
        NAME                         TYPE   BUS     DOMAIN   STATUS
        ----                         ----   ---     ------   ------
        pci_0                        BUS    pci_0   primary
        pci_1                        BUS    pci_1   primary
        pci_2                        BUS    pci_2   primary
        pci_3                        BUS    pci_3   primary
        /SYS/MB/PCIE1                PCIE   pci_0   primary  EMP
        /SYS/MB/SASHBA0              PCIE   pci_0   primary  OCC
        /SYS/MB/NET0                 PCIE   pci_0   primary  OCC
        /SYS/MB/PCIE5                PCIE   pci_1   primary  EMP
        /SYS/MB/PCIE6                PCIE   pci_1   primary  EMP
        /SYS/MB/PCIE7                PCIE   pci_1   primary  EMP
        /SYS/MB/PCIE2                PCIE   pci_2   primary  EMP
        /SYS/MB/PCIE3                PCIE   pci_2   primary  OCC
        /SYS/MB/PCIE4                PCIE   pci_2   primary  EMP
        /SYS/MB/PCIE8                PCIE   pci_3   primary  EMP
        /SYS/MB/SASHBA1              PCIE   pci_3   primary  OCC
        /SYS/MB/NET2                 PCIE   pci_3   primary  OCC
        /SYS/MB/NET0/IOVNET.PF0      PF     pci_0   primary
        /SYS/MB/NET0/IOVNET.PF1      PF     pci_0   primary
        /SYS/MB/NET2/IOVNET.PF0      PF     pci_3   primary
        /SYS/MB/NET2/IOVNET.PF1      PF     pci_3   primary

    All of the "PCIE" type devices are available for DIO, with a few limitations. If the device is a slot, the card in that slot must support the DIO feature. The documentation lists all such cards. Moving a slot to a different domain works just like moving a PCI root complex. Again, this is not a dynamic process and includes reboots of the affected domains. The resulting configuration is nicely shown in a diagram in the Admin Guide.

    There are several important things to note and consider here:

    - The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.
    - Solaris will create nodes for this hardware under /devices. This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device. It is very important to understand that all of this PCIe infrastructure is virtual only! Only the actual endpoint devices are true physical hardware.
    - There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: only if the root domain is up and running will the guest domain have access to the endpoint device.
    - The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure.
    - This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain) it will reset and thus disrupt the operation of the endpoint device owned by the guest domain. The result in the guest is not predictable. I recommend configuring the resulting behaviour of the guest using domain dependencies as described in the Admin Guide in the chapter "Configuring Domain Dependencies".

    Please consult the Admin Guide in the section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details!

    As you can see, there are several restrictions for this feature. It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices. Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining. I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although then with similar limitations).

    This was a rather short entry, more for completeness. I believe that DIO can usually be replaced by SR-IOV, which is much more flexible. I will cover SR-IOV in the next section of this blog series.

    Read the article

  • problem with frustum AABB culling in DirectX

    - by Matthew Poole
    Hi, I am currently working on a project with a few friends, and I am trying to get frustum culling working. Every single tutorial or article I go to shows that my math is correct and that this should be working. I thought maybe by posting here, somebody would catch something I could not. Thank you. Here are the important code snippets:

        //create the projection matrix
        void CD3DCamera::SetLens(float fov, float aspect, float nearZ, float farZ)
        {
            D3DXMatrixPerspectiveFovLH(&projMat, D3DXToRadian(fov), aspect, nearZ, farZ);
        }

        //build the view matrix after changes have been made to camera
        void CD3DCamera::BuildView()
        {
            //keep axes orthogonal
            D3DXVec3Normalize(&look, &look);

            //up
            D3DXVec3Cross(&up, &look, &right);
            D3DXVec3Normalize(&up, &up);

            //right
            D3DXVec3Cross(&right, &up, &look);
            D3DXVec3Normalize(&right, &right);

            //fill view matrix
            float x = -D3DXVec3Dot(&position, &right);
            float y = -D3DXVec3Dot(&position, &up);
            float z = -D3DXVec3Dot(&position, &look);

            viewMat(0,0) = right.x; viewMat(1,0) = right.y; viewMat(2,0) = right.z; viewMat(3,0) = x;
            viewMat(0,1) = up.x;    viewMat(1,1) = up.y;    viewMat(2,1) = up.z;    viewMat(3,1) = y;
            viewMat(0,2) = look.x;  viewMat(1,2) = look.y;  viewMat(2,2) = look.z;  viewMat(3,2) = z;
            viewMat(0,3) = 0.0f;    viewMat(1,3) = 0.0f;    viewMat(2,3) = 0.0f;    viewMat(3,3) = 1.0f;
        }

        void CD3DCamera::BuildFrustum()
        {
            D3DXMATRIX VP;
            D3DXMatrixMultiply(&VP, &viewMat, &projMat);

            D3DXVECTOR4 col0(VP(0,0), VP(1,0), VP(2,0), VP(3,0));
            D3DXVECTOR4 col1(VP(0,1), VP(1,1), VP(2,1), VP(3,1));
            D3DXVECTOR4 col2(VP(0,2), VP(1,2), VP(2,2), VP(3,2));
            D3DXVECTOR4 col3(VP(0,3), VP(1,3), VP(2,3), VP(3,3));

            // Planes face inward
            frustum[0] = (D3DXPLANE)(col2);        // near
            frustum[1] = (D3DXPLANE)(col3 - col2); // far
            frustum[2] = (D3DXPLANE)(col3 + col0); // left
            frustum[3] = (D3DXPLANE)(col3 - col0); // right
            frustum[4] = (D3DXPLANE)(col3 - col1); // top
            frustum[5] = (D3DXPLANE)(col3 + col1); // bottom

            // Normalize the frustum
            for( int i = 0; i < 6; ++i )
                D3DXPlaneNormalize( &frustum[i], &frustum[i] );
        }

        bool FrustumCheck(D3DXVECTOR3 max, D3DXVECTOR3 min, const D3DXPLANE* frustum)
        {
            // Test assumes frustum planes face inward.
            D3DXVECTOR3 P;
            D3DXVECTOR3 Q;

            bool ret = false;
            for(int i = 0; i < 6; ++i)
            {
                // For each coordinate axis x, y, z...
                for(int j = 0; j < 3; ++j)
                {
                    // Make PQ point in the same direction as the plane normal on this axis.
                    if( frustum[i][j] > 0.0f )
                    {
                        P[j] = min[j];
                        Q[j] = max[j];
                    }
                    else
                    {
                        P[j] = max[j];
                        Q[j] = min[j];
                    }
                }

                if(D3DXPlaneDotCoord(&frustum[i], &Q) < 0.0f )
                    ret = false;
            }
            return true;
        }

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files.

    My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files?

    I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields.

    A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often than not, most of the file will remain consistent over a long period, only marred by one or two errant sections - but within that period, which section is inconsistent varies.

    For example, say a file has four sections with three data fields each, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, sections 1-3 are consistent, section 4 is one row down, but a section 5 appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section 5 disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order.

    The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants might have three- and five-field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database (see the sketch below).

    Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
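    Here is a rough Python sketch of the variant-matching idea described in the question. All names (the variant dictionaries, the ranges, the use of openpyxl-style sheets and pymongo) are hypothetical stand-ins chosen for illustration; the real mappings would come from inspecting the actual spreadsheets.

        # Sketch: pick a per-section variant based on the file number,
        # then extract cells according to that variant's mapping.
        from pymongo import MongoClient

        # Each variant maps field name -> (row, column) in the sheet.
        SECTION_FOUR_VARIANTS = {
            "normal":       {"voltage": (10, 2), "current": (11, 2), "result": (12, 2)},
            "shifted_down": {"voltage": (11, 2), "current": (12, 2), "result": (13, 2)},
        }

        # File-number ranges -> which variant applies (inclusive bounds).
        SECTION_FOUR_RANGES = [
            ((7000, 7635), "shifted_down"),
            ((7636, 8000), "normal"),
        ]

        def variant_for(file_number, ranges):
            for (lo, hi), name in ranges:
                if lo <= file_number <= hi:
                    return name
            return "normal"  # fall back to the latest template

        def extract_section(sheet, mapping):
            # 'sheet' is anything with a cell(row=..., column=...).value accessor,
            # e.g. an openpyxl worksheet.
            return {field: sheet.cell(row=r, column=c).value
                    for field, (r, c) in mapping.items()}

        def load_file(file_number, sheet, collection):
            variant = variant_for(file_number, SECTION_FOUR_RANGES)
            doc = {"file_number": file_number,
                   "section_four": extract_section(sheet, SECTION_FOUR_VARIANTS[variant])}
            collection.insert_one(doc)

        # Usage sketch:
        # collection = MongoClient()["products"]["test_results"]
        # load_file(7100, worksheet, collection)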

    Read the article

  • How does the Trash Can work, and where can I find official documentation, reference, or specification for it?

    - by MestreLion
    When trying to manage the trash can from mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the specs 100%. Here's why:

    For non-/ partitions, it always uses <driveroot>/.Trash-<uid>. It never used <driveroot>/.Trash/<uid>, even when I created it in advance. While this works, it's annoying: if I have 15 users, I end up with 15 /.Trash-xxx folders in my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" in my drives is very unpleasant. And the spec says "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why does it never use it?

    Root trash does not work, at least not out of the box. Open nautilus as root and click on trash; it gives an error. Try to delete any file, it says "it can't move to trash". Ok, I know this can be fixed by creating /root/.local/share. But the spec says "A "home trash" directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays." Why the error then? Bug?

    Why must I change /etc/fstab entries for mounted volumes, adding options like uid and gid, if the volumes are already mounted as RW for everyone?

    These are just some examples of deviation from the standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can I find a technical reference for Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow the specs, please tell me what I am doing wrong, especially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks!

    EDIT: Some more info:

    If a given fs has no support for the sticky bit (VFAT, NTFS), it probably doesn't have support for permissions either (at least VFAT surely doesn't). So what prevents one user from purging / restoring other users' .Trash-xxx? If one can read/write his own Trash, one can do the same for the whole drive, including others' trashes, correct? Or does Gnome have some kind of "extra" protection on .Trash-xxx folders on VFAT/NTFS filesystems? If Linux can "emulate" file permissions on NTFS mounts by editing the /etc/fstab uid and gid options, can it also "emulate" the sticky bit? I would really prefer to use the /.Trash/xxx format...

    For the root issue: for the / partition, I can use trash as root, and it goes to /root/.local/share/Trash. But if I click on Nautilus' "Trash" (as root), I get an error. Don't you? So files are correctly trashed, but I can't access them. All I can do is manually "purge" them (by deleting files in /root/.local/share/Trash), but restoring would be very tricky (opening info files and manually moving, etc.). For non-/ partitions (or at least for VFAT/NTFS), I cannot even use trash as root: it does not create a .Trash-0 folder, it simply says "Cannot trash, want to permanently delete?" Why?

    About fstab: I use it for a permanent mount for my NTFS partitions. I have several, and if not "pre-mounted" they really clutter the desktop and/or Nautilus. I'd rather have them pre-mounted, integrated in my fs, in mounts like /data, /windows/xp, /windows/vista, and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives.

    So, if Ubuntu/Gnome truly follows the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions?
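    To make the $topdir/.Trash vs $topdir/.Trash-$uid discussion easier to follow, here is a small Python sketch of the selection logic as the FreeDesktop spec describes it. This is my own paraphrase of the spec for illustration, not Ubuntu's actual implementation - which is the very thing the question is asking about.

        # Sketch of the trash-directory selection described by the FreeDesktop spec.
        import os

        def trash_dir_for(topdir):
            uid = os.getuid()
            shared = os.path.join(topdir, ".Trash")
            # Preferred form: an admin-created $topdir/.Trash that is sticky and
            # not a symlink (the spec also requires it be writable by all users,
            # which is not checked here).
            if (os.path.isdir(shared)
                    and not os.path.islink(shared)
                    and os.stat(shared).st_mode & 0o1000):  # sticky bit
                return os.path.join(shared, str(uid))
            # Fallback: a per-user $topdir/.Trash-$uid directory.
            return os.path.join(topdir, ".Trash-%d" % uid)

        # Example: print(trash_dir_for("/media/data"))

    On a VFAT or NTFS volume the sticky-bit check above can never succeed, which is one plausible reason the per-user .Trash-xxx fallback is always chosen - though, as the question notes, the observed behaviour still needs an authoritative reference.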

    Read the article

  • Is there a good [and modern] reason to not have static HTML pages with AJAX content, rather than generated pages?

    - by user1725
    Assumptions: we don't care about IE6 or Noscript users.

    Let's pretend we have the following design concept: all your pages are HTML/CSS that create the aesthetics, layout, colours, and general design-related things. Let's pretend this basic code below is that:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html>
        <head>
            <link href="/example.css" rel="stylesheet" type="text/css"/>
            <script src="example.js" type="text/javascript"></script>
        </head>
        <body>
            <div class="left"></div>
            <div class="mid"></div>
            <div class="right"></div>
        </body>
        </html>

    Which, with the right CSS, should in theory produce three vertical columns on the web page.

    Now, here's the root of the question: what are the serious advantages and/or disadvantages of loading the content of these columns (let's assume they are all indeed dynamic content, not static) via AJAX requests, versus having the content pre-set with a scripting language? So for instance, in the AJAX example we would have (let's assume jQuery is used on load):

        //Multiple http requests
        $("body > div.left").load("./script.php?content=news");
        $("body > div.right").load("./script.php?content=blogs");
        $("body > div.mid").load("./script.php?content=links");

    OR

        //Single http request
        $.ajax({
            url: './script.php?content=news|blogs|links',
            type: 'GET',
            dataType: 'json',
            success: function (data) {
                $("body > div.left").html(data.news);
                $("body > div.right").html(data.blogs);
                $("body > div.mid").html(data.links);
            }
        });

    Versus doing this:

        <body>
            <div class="left">
                <?php echo function_returning_news(); ?>
            </div>
            <div class="mid">
                <?php echo function_returning_blogs(); ?>
            </div>
            <div class="right">
                <?php echo function_returning_links(); ?>
            </div>
        </body>

    I'm personally thinking right now that doing static HTML pages is the better method. My reasoning is:

    - I've separated my data, logic, and presentation (i.e., "MVC") code. I can make changes to one without the others.
    - Browser caches mean I'm mostly getting server load for the content, not the presentation wrapped around it.
    - I could turn my "script.php" into a more robust API for the website.

    But I'm not certain or clear that these are legitimately good reasons, and I'm not confidently aware of other issues that could happen, so I would like to know the pros and cons, so to speak.

    Read the article

  • Surviving MATLAB and R as a Hardcore Programmer

    - by dsimcha
    I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed by people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify:

    - (Both): The extreme emphasis on vectors and matrices, to the extent that there are no true primitives.
    - (Both): The difficulty of basic string manipulation.
    - (Both): Lack of, or awkwardness in, support for basic data structures like hash tables and "real", i.e. type-parametric and nestable, arrays.
    - (Both): They're really, really slow even by interpreted language standards, unless you bend over backwards to vectorize your code.
    - (Both): They seem not to be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem not to be designed to make simple text filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms near impossible.
    - (Both): Object orientation seems to have a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C.
    - (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you roll your own linked list in either of these languages.
    - (MATLAB): You can't put multiple top-level functions in a single file, encouraging very long functions and cut-and-paste coding.
    - (MATLAB): Integers apparently don't exist as a first-class type.
    - (R): The basic built-in data structures seem way too high level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower-level data structures.
    - (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell.
    - (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better.

    In general, I also feel like MATLAB and R could be easily replaced by plain old libraries in more general-purpose languages, if sufficiently comprehensive libraries existed. This is especially true in newer general-purpose languages that include lots of features for library writers. Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips?

    Edit: I'm seeing one issue from some of the answers I've gotten. I have a strong personal preference, when I analyze data, to have one script that incorporates the whole pipeline. This implies that a general-purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, etc. I find using MATLAB/R for some of my work and a completely different language with a completely different address space and way of thinking for the rest to be a huge source of friction. Furthermore, I know there are glue layers that exist, but they always seem to be horribly complicated and a source of friction.

    Read the article

  • Content Challenge: You Can Only Get it Here

    - by Mike Stiles
    Part of the content conundrum for brands is figuring out what kind of content customers would find cool, desirable, and relevant. The mere fact many brands have no idea what this content might be is, in itself, pretty alarming. You’d have to have a pretty thorough lack of involvement with and understanding of your customers to not know what they might like. But despite what should be a great awakening in which consumers are using every technology and trick in the book to shield themselves from ads and commercials, brand self-obsession continues as marketers concentrate on their message, their campaign, what they want to say, and what they want social users to do. When individuals conduct themselves in that same fashion on Facebook and Twitter, it gets tiresome and starts losing value pretty quickly. Their posts eventually get hidden. Conversely, friends who post things that consistently entertain or inform, with little self-marketing desperation involved, win the coveted “show all updates” setting. Of course brands are going to use social to market. It’s pretty much the point of having social in the marketing mix. And yes, people who follow a brand’s Twitter account or “Like” a brand’s Facebook Page implicitly state they want to know what’s going on with that brand’s products and services. But if you have a Facebook friend that assumes you want every one of her posts to be about what wine she likes (Mitsubishi’s current campaign is even based around weeding out pretentious Facebook friends, then running them over), then you know how it must feel for your fans and followers to get a sales pitch for your crackers or whatever you’re selling every single time. Is there such a thing as content that doesn’t sell but that still advances the brand and makes the consumer more involved and valuable? Of course. And perhaps there are no better companies than enterprise brands to do it. Enterprise organizations are large enough to go beyond a product and engage readers/viewers at higher, broader levels…communicating expertise across entire sectors, subjects and industries. You’re going from pitchman to news source, and getting full credit for it as the presenter. A recent GigaOM article pointed out the success a San Francisco-based startup called Crunchyroll is having. Their niche (and they proudly admit it’s a niche) is providing Japanese anime, Korean drama and Asian live action content to countries that can’t get it any other way via licensing deals. Shows are available in HD and on the same day they air in the host country. Crunchyroll not only gets 8 million viewers a month, they have 100,000 paying subscribers at $7-12/month. Got a point, Mike? I do happen to have one. Crunchyroll illustrates the content opportunity enterprise companies have…which is to determine your “area,” the interest graph of your customers, then provide content that speaks to and satisfies those interests that can’t be found anywhere else. At least not in the same style, or of the same quality, or with the same authority. Do what no one else is doing. Provide what no one else is providing in your sector. If underserved users are willing to pay monthly for access to awkwardly moving cartoon dragons, imagine the audience you could attract with free, useful, non-sales content in your customers’ area of interest. It’s an audience you’ll want in place when the time does come to put out that marketing message. A content challenge is better than a content conundrum any day.

    Read the article

  • Project Euler 17: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here's my solution for Project Euler Problem 17. As always, any feedback is welcome.

        # Euler 17
        # http://projecteuler.net/index.php?section=problems&id=17
        # If the numbers 1 to 5 are written out in words:
        # one, two, three, four, five, then there are
        # 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
        # If all the numbers from 1 to 1000 (one thousand)
        # inclusive were written out in words, how many letters
        # would be used?
        #
        # NOTE: Do not count spaces or hyphens. For example, 342
        # (three hundred and forty-two) contains 23 letters and
        # 115 (one hundred and fifteen) contains 20 letters. The
        # use of "and" when writing out numbers is in compliance
        # with British usage.

        import time
        start = time.time()

        def to_word(n):
            h = { 1 : "one", 2 : "two", 3 : "three", 4 : "four", 5 : "five",
                  6 : "six", 7 : "seven", 8 : "eight", 9 : "nine", 10 : "ten",
                  11 : "eleven", 12 : "twelve", 13 : "thirteen", 14 : "fourteen",
                  15 : "fifteen", 16 : "sixteen", 17 : "seventeen", 18 : "eighteen",
                  19 : "nineteen", 20 : "twenty", 30 : "thirty", 40 : "forty",
                  50 : "fifty", 60 : "sixty", 70 : "seventy", 80 : "eighty",
                  90 : "ninety", 100 : "hundred", 1000 : "thousand" }

            word = ""

            # Reverse the numbers so position (ones, tens,
            # hundreds,...) can be easily determined
            a = [int(x) for x in str(n)[::-1]]

            # Thousands position
            if (len(a) == 4 and a[3] != 0):
                # This can only be one thousand based
                # on the problem/method constraints
                word = h[a[3]] + " thousand "

            # Hundreds position
            if (len(a) >= 3 and a[2] != 0):
                word += h[a[2]] + " hundred"

            # Add "and" string if the tens or ones
            # position is occupied with a non-zero value.
            # Note: routine is broken up this way for [my] clarity.
            if (len(a) >= 2 and a[1] != 0):
                # catch 10 - 99
                word += " and"
            elif len(a) >= 1 and a[0] != 0:
                # catch 1 - 9
                word += " and"

            # Tens and ones position
            tens_position_value = 99
            if (len(a) >= 2 and a[1] != 0):
                # Calculate the tens position value per the
                # first and second element in array
                # e.g. (8 * 10) + 1 = 81
                tens_position_value = int(a[1]) * 10 + a[0]
                if tens_position_value <= 20:
                    # If the tens position value is 20 or less
                    # there's an entry in the hash. Use it and there's
                    # no need to consider the ones position
                    word += " " + h[tens_position_value]
                else:
                    # Determine the tens position word by
                    # dividing by 10 first. E.g. 8 * 10 = h[80]
                    # We will pick up the ones position word later in
                    # the next part of the routine
                    word += " " + h[(a[1] * 10)]

            if (len(a) >= 1 and a[0] != 0 and tens_position_value > 20):
                # Deal with ones position where tens position is
                # greater than 20 or we have a single digit number
                word += " " + h[a[0]]

            # Strip the spaces out of the string (spaces aren't counted)
            return word.replace(" ", "")

        def to_word_length(n):
            return len(to_word(n))

        print sum([to_word_length(i) for i in xrange(1,1001)])
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

    Read the article

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx
    http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx

    In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message "[Partition] has exceeded the max bytes". Below, I wanted to provide some additional info on this particular issue and help identify some options if this occurs. As an aside, this post only applies if you are missing portions of Usage data - think blind spots on intermittent days or user activity regularly sparse for the afternoon/evening. If this fits your scenario - read on. But if Usage logs are outright missing, go check out my Troubleshooting post first.

    Background on the problem:
    The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-size logical partitions - and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days].

    Going back to the default scenario, the "max size" for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.

    From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away because SharePoint won't allow any more to be written to that day's partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following that gets reported in the ULS:

        04/08/2012 09:30:04.78 OWSTIMER.EXE (0x1E24) 0x2C98 SharePoint Foundation Health i0m6 High Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368

    It's also worth noting that the exact bytes reported (e.g. '444858368' above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; this error message indicates that the reported usage has exceeded the partition size for the given day.

    What it means:
    The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise to periodically check the ULS logs for this message. Down the road, I plan to explore if [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built Custom Health Rules, I'd be interested to hear about your experiences).

    Overcoming this issue also poses a challenge, with workaround options including:

    Lower the retention
    Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data - the lower the retention, the lower the divisor and thus a bigger partition. For example, halving the retention from 14 to 7 days would halve the number of partitions, but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions... and so on.

    Recreate the LoggingDB with an increased size
    The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB.

    Here is an example snippet to update the "Page Requests" Usage Definition:

        $def = Get-SPUsageDefinition -Identity "page requests"
        $def.MaxTotalSizeInBytes = 12400000000
        $def.update()

    Create a new Logging database and attach it to the Usage Service Application using the following command:

        Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

    Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the query with the name provided in the PowerShell above):

        SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (nolock)
        WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'
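    Since the partition size is just [max size] / [retention days], a trivial sketch (mine, not from the original post) makes it easy to see the trade-off before changing either knob:

        # Sketch: partition size implied by MaxTotalSizeInBytes and retention days.
        def partition_size_mb(max_total_bytes, retention_days):
            return max_total_bytes / float(retention_days) / (1024 * 1024)

        for days in (14, 7, 2):
            print("%2d days -> ~%.0f MB per daily partition"
                  % (days, partition_size_mb(6200000000, days)))
        # 14 days -> ~422 MB, 7 days -> ~845 MB, 2 days -> ~2956 MB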

    Read the article

  • Sharing configuration settings between Windows Azure roles

    - by theo.spears
    If you are working on a medium-large Windows Azure project it's likely it will involve more than one role, for example separate web and worker roles. Unfortunately, although all the Windows Azure configuration settings are stored in a single cscfg file, there is no way to share configuration settings between multiple roles. This means you have to duplicate common settings like connection strings across all your roles. There is an open Connect issue about this topic, but Microsoft have not said when they will fix it. In the mean time I've put together a dirty dirty hack cunning workaround that creates a fake role containing your shared configuration settings, and copies it to all roles as part of the build process. Here's how you set it up:

    1. Download the zip file attached to this post, and unzip it into the folder containing your Azure project (not your solution folder).

    2. Edit your csdef and cscfg files to include the placeholder project.

    ServiceDefinition.csdef:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WorkerRole name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" />
            </ConfigurationSettings>
          </WorkerRole>
          <WorkerRole name="MyWorker">
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WorkerRole>
          <WebRole name="MyWeb">
            <Sites>
              <Site name="Web">
                <Bindings>
                  <Binding name="WebEndpoint" endpointName="WebEndpoint" />
                </Bindings>
              </Site>
            </Sites>
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WebRole>
        </ServiceDefinition>

    ServiceConfiguration.cscfg:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" value="Hello World" />
            </ConfigurationSettings>
            <Instances count="1" />
          </Role>
          <Role name="MyWorker">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
          <Role name="MyWeb">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    It is important that all your roles contain a ConfigurationSettings entry in both cscfg and csdef files, even if it's empty - otherwise the shared configuration settings will not be inserted.

    3. Open your Azure deployment (.ccproj) project in notepad, and add the highlighted line below:

        ...
        <Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />
        <Import Project="globalsettings/globalsettings.targets" />
        </Project>

    It is important you add this below the Microsoft.CloudService.targets import line, as it replaces some of the rules defined in that file. Visual Studio will prompt you to reload the project; say yes.

    At this point you will have a new Azure role called 'GLOBAL' with settings you can edit through the Visual Studio properties panel as normal. This role will never be deployed, but any settings you add to it will be copied to all your other roles when deployed or tested locally within Visual Studio.

    Read the article

  • Mind Reading with the Raspberry Pi

    - by speakjava
    Mind Reading With The Raspberry Pi

    At JavaOne in San Francisco I did a session entitled "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi". As part of this I showed some demonstrations of things I'd done using Java on the Raspberry Pi. This is the first part of a series of blog entries that will cover all the different aspects of these demonstrations.

    A while ago I had bought a MindWave headset from Neurosky. I was particularly interested to see how this worked, as I had had the opportunity to visit Neurosky several years ago when they were still developing this technology. At that time the 'headset' consisted of a headband (very much in the Bjorn Borg style) with a sensor attached and some wiring that clearly wasn't quite production ready. The commercial version is very simple and easy to use: there are two sensors, one which rests on the skin of your forehead, the other is a small clip that attaches to your earlobe.

    Typical EEG systems used in hospitals require lots of sensors, and they all need copious amounts of conductive gel to ensure the electrical signals are picked up. Part of Neurosky's innovation is the development of this simple dry-sensor technology. Having put on the sensor and turned it on (it runs off a single AAA battery) it collects data and transmits it to a USB dongle plugged into a PC, or in my case a Raspberry Pi.

    From a hacking perspective the USB dongle is ideal because it does not require any special drivers for any complex, low level USB communication. Instead it appears as a simple serial device, which on the Raspberry Pi is accessed as /dev/ttyUSB0. Neurosky have published details of the command protocol. In addition, the MindSet protocol document, including sample code for parsing the data from the headset, can be found here.

    To get everything working on the Raspberry Pi using Java the first thing was to get serial communications going. Back in the dim distant past there was the Java Comm API. Sadly this has grown a bit dusty over the years, but there is a more modern open source project that provides compatible and enhanced functionality, RXTXComm. This can be installed easily on the Pi using sudo apt-get install librxtx-java.

    Next I wrote a library that would send commands to the MindWave headset via the serial port dongle and read back data being sent from the headset. The design is pretty simple: I used an event based system so that code using the library could register listeners for different types of events from the headset. You can download a complete NetBeans project for this here. This includes javadoc API documentation that should make it obvious how to use it (incidentally, this will work on platforms other than Linux. I've tested it on Windows without any issues, just by changing the device name to something like COM4).

    To test this I wrote a simple application that would connect to the headset and then print the attention and meditation values as they were received from the headset. Again, you can download the NetBeans project for that here.

    Oracle recently released a developer preview of JavaFX on ARM which will run on the Raspberry Pi. I thought it would be cool to write a graphical front end for the MindWave data that could take advantage of the built in charts of JavaFX. Yet another NetBeans project is available here. Screen shots of the app, which uses a very nice dial from the JFxtras project, are shown below.

    I probably should add labels for the EEG data so the user knows which trace is the low alpha, which the mid gamma, and so on. Given that I'm not a neurologist I suspect that it won't increase my understanding of what the (rather random looking) traces mean.

    In the next blog I'll explain how I connected a LEGO motor to the GPIO pins on the Raspberry Pi and then used my mind to control the motor!
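    For readers who want to poke at the dongle before writing a Java/RXTXComm library like the one described above, here is a minimal Python sketch using the pyserial package. To be clear, this is not the code from the post: the baud rate and the idea of just dumping raw bytes are my own assumptions - consult Neurosky's protocol documents (linked above) for the actual packet format.

        # Rough sketch: open the MindWave dongle as a plain serial device and
        # dump the raw bytes it sends, to confirm data is arriving.
        # Assumes the device path from the post and a guessed baud rate.
        import serial  # pip install pyserial

        PORT = "/dev/ttyUSB0"   # from the post; on Windows use e.g. "COM4"
        BAUD = 115200           # assumption - check the Neurosky docs

        with serial.Serial(PORT, BAUD, timeout=1) as dongle:
            for _ in range(100):
                chunk = dongle.read(32)
                if chunk:
                    print(" ".join("%02x" % b for b in bytearray(chunk)))

    Parsing attention/meditation values out of that byte stream is exactly what the event-based Java library in the post does for you.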

    Read the article

  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced. This is meant to replace the folder architecture of 'Folders_g'. While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten. One of the main goals of the new folders is to scale better at large volumes and remove the limitations of 1000 content items or sub-folders within a folder. Along with the new architecture, it has a new look and a few additional features have been added. One of those features is Query Folders. These are folders that are populated simply by a query rather than by literally putting items within the folders. This is something that the Library has provided, but it always took an administrator to define them through the Web Layout Editor. Now users can quickly define query folders anywhere within the standard folder hierarchy.

    Within this new Framework Folders is the very handy ability to do metadata updates. It's similar to the Propagate feature in Folders_g, but there are some key differences that make this very flexible and much more powerful:

    - It's used within regular folders and Query Folders. So the content you're updating doesn't all have to be in the same folder... or in a folder at all.
    - The user decides what metadata to propagate. In Folders_g, the system administrator controls which fields will be propagated using a single administration page. In Framework Folders, the user decides at that time which fields they want to update.
    - You set the value you want on the propagation screen. In Folders_g, it used the metadata defined on the parent folder to propagate. With Framework Folders, you supply the new metadata value when you select the fields you want to update. It does not have to be defined on the parent folder.

    Because of these differences, I think the new propagate method is much more useful. Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders.

    Here are the basic steps to perform propagation:

    1. First create a folder for the propagation. You can use a regular folder, but a Query Folder will work as well.
    2. Go into the folder to get the results.
    3. In the Edit menu, select 'Propagate'.
    4. Select the check-box next to the field to update and enter the new value.
    5. Click the Propagate button.
    6. Once complete, a dialog will appear showing it is complete.

    What's also nice is that the process happens asynchronously in the background, which means you can browse to other pages and do other things while it is still working. You aren't stuck on the page waiting for it to complete. In addition, you can add a configuration flag to the server to turn on a status indicator icon. Set 'FldEnableInProcessIndicator=1' and it will show a working icon as it's doing the propagation.

    There is a caveat when using propagation on a Query Folder. While a propagation on a regular folder will update all of the items within that folder, a Query Folder propagation will only update the first 50 items. So you may need to run it multiple times depending on the size... and have the query exclude the items as they get updated.

    One extra note... Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content. But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc.), you'll need to continue using Folders_g until they are updated to use the new folders.

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes. As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for the acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission critical and time sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle. To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk. Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close. A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source, a significant productivity enhancement when doing research. We also see a significant improvement in reporting using Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle. A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams. 
It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to uptake more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings." In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team that get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and are pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group." Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller! Corey West, Senior Vice President Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • Nest reinvents smoke detectors. Introduces a smart, talking smoke detector that keeps quiet when you wave

    - by Gopinath
    Nest, the leading smart thermostat maker, has introduced a new smart home device today: Nest Protect, a smart, talking smoke and carbon monoxide detector that goes quiet when you wave your hand.

    Fewer annoyances and more intelligence
    Smoke detectors have been around for a long time and play a major role in protecting homes from fire accidents. But the technology behind these devices is stale, and there has been no major innovation for the past several years. With the introduction of Nest Protect, the landscape of smoke detectors is set to change, just as the Nest thermostat redefined its industry two years ago. Nest Protect is internet enabled and equipped with motion- and smoke-detection sensors, so when it starts beeping you can silence it by waving your hand instead of performing circus feats to turn off the alarm. Everyone who cooks in a home equipped with a smoke detector knows how annoying it is to turn off a sensitive detector that goes off far too often. Apart from addressing the annoyances of a regular smoke detector, Nest Protect can talk. It alerts users with clear, actionable instructions when it detects danger. Instead of harsh beeps, it actually speaks to you so you know what is happening: it tells you what it has detected and in which room. Multiple Nest Protects installed in a home can communicate with each other. Let's say there is smoke in the bedroom: the Nest Protect installed there shares this information with all the other Nest Protects in the home, and your kitchen device can alert you that there is smoke in the bedroom.

    There is an app for that
    The internet-enabled Nest Protect has an app to view its status and various alerts. When the Protect is running low on battery, it alerts you to replace the batteries soon. If there is smoke at home while you are away, you will get message alerts. The app works on all major smartphones as well as tablets.

    Automatically shuts down gas furnaces/heaters on smoke
    Apart from forming a network with other Nest Protect devices installed at home, Nest Protect can also communicate with a Nest Thermostat if one is installed. When carbon monoxide is detected, it can shut off your gas furnace automatically. Also, with the help of its motion detectors, it improves the Nest Thermostat's auto-away functionality.

    It looks elegant and costs a lot more than a regular smoke detector
    Just like the Nest Thermostat, Nest Protect is elegant and adorable; you fall in love with it the moment you see it. It's another masterpiece from the designer of Apple's iPod. All is good with the Nest Protect, except the price: it costs a whopping $129, almost 4 times more than the best-selling conventional smoke detectors available at around $30. A single-bedroom apartment would require at least 3 detectors, so it costs around $390 to install Nest Protects compared to the roughly $90 required for conventional smoke detectors. Though the Nest Thermostat is expensive compared to conventional thermostats, it offered great savings through its intelligent auto-away feature, and users were able to see returns on their investment. If Nest Protect can also provide a good return on investment, it will be very successful.

    Read the article

  • Oracle is #1 in Life Sciences!

    - by Michael Snow
    Guest post today by: John Klinke, Senior Principal Product Manager, Oracle WebCenter Content
    Based on the announcement last week at EMC World about Documentum for Life Sciences, it looks like EMC is starting to have regrets about pulling out of the life sciences space over the last few years. Certainly their content management customers and partners in life sciences have noticed their retreat. Many of them are now talking to us about WebCenter Content since they’ve seen the writing on the wall regarding Documentum’s decline, including falling revenue, shrinking investment, departure of key executives, and EMC’s auditing of existing customers. While EMC has been neglecting the life sciences industry over the last few years, Oracle has been increasing its investment and commitment by providing best-of-breed solutions to enable pharmaceutical, medical device, biotech and CRO companies to improve productivity and drive innovation. As a result, according to IDC Health Insights, Oracle is #1 in life sciences. From research and development through clinical development and manufacturing to sales and marketing, Oracle provides the solutions that life sciences companies depend on to accelerate R&D, expedite clinical trials, and speed time to market. Specifically for Oracle WebCenter, our life sciences business is booming thanks to our comprehensive offerings led by Oracle WebCenter Content, our 21 CFR Part 11 compliant enterprise content management platform. Unlike Documentum, WebCenter Content is all about keeping the cost of ownership low - through simplicity, flexibility, and out-of-the-box integrations. WebCenter Content is a single, comprehensive ECM platform that can handle all your content management needs, from controlled documents to digital asset management, records management and document imaging and capture. And it is much more flexible, letting you do configuration changes instead of customizations to meet your business needs. It also saves you money by being pre-integrated with the rest of the Oracle Fusion Middleware technology stack and with leading enterprise applications like Siebel (including Siebel CTMS), Primavera, E-Business Suite, JD Edwards and PeopleSoft. So if you think EMC’s announcement last week was too little and too late, I’m happy to report that Oracle is here to help. Back in October, we announced our Move Off Documentum offer, which provides a 100% trade-in credit for your Documentum licenses when you purchase Oracle WebCenter, and the good news is, this offer is still available for a limited time. So stop maintaining Documentum and start innovating with Oracle WebCenter. For more details see www.oracle.com/moveoff/documentum.

    Read the article

  • Oracle Social Network Developer Challenge: Fishbowl Solutions

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Today, I give you the final entry in the Oracle Social Network Developer Challenge, held last week during OpenWorld. This one comes from Friend of the ‘Lab and Fishbowl Solutions (@fishbowle20) hacker, John Sim (@jrsim_uix), whom you might remember from his XBox Kinect demo at COLLABORATE 12 (presentation slides and abstract) and other hacks and exploits with WebCenter. We put this challenge together specifically for developers like John, who like to experiment with new tools and push the envelope of what’s possible and build cool things, and as you can see from his entry John did just that, mashing together Google Maps and Oracle Social Network into a mobile app built with PhoneGap that uses the device’s camera and GPS to keep teams on the move in touch. He calls it a Mobile GeoTagging Solution, but I think Avengers Assemble! would have been equally descriptive, given that was obviously his inspiration. Here’s his description of the mobile app: My proposed solution was to design and simplify GeoLocation mapping, and automate updates for users and teams on the move; who don’t have access to a laptop or want to take their ipads out – but allow them to make quick updates to OSN and upload photos taken from their mobile device – there and then. As part of this; the plan was to include a rules engine that could be configured by the user to allow the device to automatically update and post messages when they arrived at a set location(s). Inspiration for this came from on{x} – automate your life. Unfortunately, John didn’t make it to the conference to show off his hard work in person, but luckily, he had a colleague from Fishbowl and a video to showcase his work. Here are some shots of John’s mobile app for your viewing pleasure: John’s thinking is sound. Geolocation is usually relegated to consumer use cases, thanks to services like foursquare, but distributed teams working on projects out in the world definitely need a way to stay in contact. Consider a construction job. Different contractors all converge on a single location, and time is money. Rather than calling or texting each other and risking a distracted driving accident, an app like John’s allows everyone on the job to see exactly where the other contractors are. Using his GPS rules, they could easily be notified about how close each is to the site, definitely useful when you have a flooring contractor sitting idle, waiting for an electrician to finish the wiring. The best part is that the project manager or general contractor could stay updated on all the action (or inaction) using Oracle Social Network, either sitting at a desk using the browser app or desktop client or on the go, using one of the native mobile apps built for Oracle Social Network. I can see this being used by insurance adjusters too, and really any team that, erm, assembles at a given spot. Of course, it’s also useful for meeting at the pub after the day’s work is done. Beyond people, this solution could also be implemented for physical objects that are en route to a destination. Say you’re a customer waiting on a rail shipment or a package delivery. You could track your valuables’ whereabouts easily as they report their progress via checkins. If they deviated from the GPS rules, you’d be notified. You might even be able to get a picture into Oracle Social Network with some light hacking. Thanks to John and his colleagues at Fishbowl for participating in our challenge. We hope everyone had a good experience. 
Make sure to check out John’s blog post on his work and the experience using Oracle Social Network. Although this is the final, official entry we had, tomorrow, I’ll show you the work of someone who finished code, but wasn’t able to make the judging event. Stay tuned.
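    As a footnote to John's entry: at its core, a GPS rule like the ones he describes is just a distance check against a configured location. Here is a minimal Java sketch of that idea, not John's actual code; the class name, coordinates, and the final "post a checkin" step are illustrative placeholders, since the real Oracle Social Network API calls aren't shown in his post.

    public class GeoRule {
        private final double siteLat;
        private final double siteLon;
        private final double radiusMeters;

        public GeoRule(double siteLat, double siteLon, double radiusMeters) {
            this.siteLat = siteLat;
            this.siteLon = siteLon;
            this.radiusMeters = radiusMeters;
        }

        // Haversine distance between two lat/lon points, in meters.
        static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
            double earthRadius = 6371000.0;
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * earthRadius * Math.asin(Math.sqrt(a));
        }

        // True when the device is within the configured radius of the site.
        boolean matches(double deviceLat, double deviceLon) {
            return distanceMeters(siteLat, siteLon, deviceLat, deviceLon) <= radiusMeters;
        }

        public static void main(String[] args) {
            // Example rule: "within 200 m of the job site" (coordinates are made up).
            GeoRule onSite = new GeoRule(37.5297, -122.2659, 200);
            double deviceLat = 37.5301;
            double deviceLon = -122.2655;
            if (onSite.matches(deviceLat, deviceLon)) {
                // Placeholder for the call that would post an automatic update to Oracle Social Network.
                System.out.println("Arrived at site - post status update");
            }
        }
    }

    The rule fires once the device's reported position falls inside the radius; a real rules engine would add debouncing and per-user configuration on top of this check.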

    Read the article

  • Hide and Unhide Worksheets and Workbooks in Excel 2007 & 2010

    - by DigitalGeekery
    Hiding worksheets can be a simple way to protect data in Excel, or just a way to reduce the clutter of too many tabs. Here are a couple of very easy ways to hide and unhide worksheets and workbooks in Excel 2007 / 2010.

    Hiding a Worksheet
    Select the worksheet you’d like to hide by clicking on the tab at the bottom. By holding down the Ctrl key while clicking you can select multiple tabs at one time. On the Home tab, click on Format, which can be found in the Cells group. Under Visibility, select Hide & Unhide, then Hide Sheet. You can also simply right-click on the tab, and select Hide. Your worksheet will no longer be visible; however, the data contained in the worksheet can still be referenced on other worksheets.

    Unhide a Worksheet
    To unhide a worksheet, you just do the opposite. On the Home tab, click on Format in the Cells group and then under Visibility, select Hide & Unhide, then Unhide Sheet. Or, you can right-click on any visible tab, and select Unhide. In the Unhide pop up window, select the worksheet to unhide and click “OK.” Note: Although you can hide multiple sheets at once, you can only unhide one sheet at a time.

    Very Hidden Mode
    While hidden mode is nice, it’s not exactly ultra-secure. If you’d like to pump the security up a notch, there is also Very Hidden mode. To access the Very Hidden setting, we’ll have to use the built-in Visual Basic Editor by hitting the Alt + F11 keys. Select the worksheet you wish to hide from the dropdown list under Properties or by single clicking the worksheet in the VBAProject window. Next, set the Visible property to 2 – xlSheetVeryHidden. Close out of the Visual Basic Editor when finished. When the Very Hidden attribute is set on a worksheet, Unhide Sheet is still unavailable from within the Format setting on the Home tab. To remove the Very Hidden attribute and display the worksheet again, go back into the Visual Basic Editor by hitting Alt + F11 again and setting the Visible property back to –1 – xlSheetVisible. Close out of the Editor when finished.

    Hiding a Workbook
    To hide the entire workbook, select the View tab, and then click the Hide button. You’ll see the workbook has disappeared.

    Unhide a Workbook
    Select the View tab and click Unhide… and your workbook will be visible again. Just a few simple ways to hide and unhide your Excel worksheets and workbooks.
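    If you need to apply the same "very hidden" state from a program rather than the VBA editor, one option outside Excel itself is the Apache POI library for Java. The sketch below is an assumption-labeled alternative, not part of the original how-to: it presumes Apache POI 3.17 or later (for the SheetVisibility enum), and the file name and sheet index are examples only.

    import java.io.File;
    import java.io.FileOutputStream;
    import org.apache.poi.ss.usermodel.SheetVisibility;
    import org.apache.poi.ss.usermodel.Workbook;
    import org.apache.poi.ss.usermodel.WorkbookFactory;

    public class VeryHideSheet {
        public static void main(String[] args) throws Exception {
            // Open an existing workbook; "Book1.xlsx" is just an example path.
            Workbook workbook = WorkbookFactory.create(new File("Book1.xlsx"));

            // Equivalent of setting Visible = 2 - xlSheetVeryHidden in the VBA editor:
            // the first sheet disappears and is not listed in the Unhide dialog.
            workbook.setSheetVisibility(0, SheetVisibility.VERY_HIDDEN);

            // SheetVisibility.VISIBLE would mirror -1 - xlSheetVisible to bring it back.
            // Write to a new file so the original stays untouched while it is open.
            try (FileOutputStream out = new FileOutputStream("Book1-veryhidden.xlsx")) {
                workbook.write(out);
            }
            workbook.close();
        }
    }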

    Read the article

  • Cocos2d Box2d how to shoot an object inside the screen

    - by Ahoura Ghotbi
    I have the code below:

    - (id) initWithGame:(mainGame*)game {
        if ((self = [super init])) {
            isTouchEnabled_ = YES;
            self.game = game;
            CGSize size = [[CCDirector sharedDirector] winSize];
            screenH = size.height;
            screenW = size.width;
            character = [CCSprite spriteWithFile:@"o.png"];
            character.position = ccp( size.width /2 , size.height/2 );
            character.contentSize = CGSizeMake(72, 79);
            [self addChild:character z:10 tag:1];
            _body = NULL;
            _radius = 14.0f;
            // Create ball body and shape
            b2BodyDef ballBodyDef;
            ballBodyDef.type = b2_dynamicBody;
            ballBodyDef.position.Set(100/PTM_RATIO, 100/PTM_RATIO);
            ballBodyDef.userData = character;
            _body = _game.world->CreateBody(&ballBodyDef);
            b2CircleShape circle;
            circle.m_radius = 26.0/PTM_RATIO;
            b2FixtureDef ballShapeDef;
            ballShapeDef.shape = &circle;
            ballShapeDef.density = 1.0f;
            ballShapeDef.friction = 0.2f;
            ballShapeDef.restitution = 0.8f;
            _body->CreateFixture(&ballShapeDef);
            [self schedule:@selector(gameChecker:)];
            [self schedule:@selector(tick:)];
        }
        return self;
    }

    // Switches from the rising "tick:" phase to the falling "dropObject:" phase.
    - (void)gameChecker:(ccTime)dt {
        if (character.position.y > 200) {
            [self unschedule:@selector(tick:)];
            [self schedule:@selector(dropObject:)];
        }
    }

    // Pushes the character's body upward each frame and syncs sprites to their bodies.
    - (void)tick:(ccTime)dt {
        b2Vec2 force;
        force.Set(_body->GetLinearVelocity().x, _body->GetLinearVelocity().y + 1.0f);
        for (b2Body* b = _game.world->GetBodyList(); b; b = b->GetNext()) {
            if (b->GetUserData() == character) {
                b->SetLinearVelocity(force);
            }
        }
        _game.world->Step(dt, 10, 10);
        for (b2Body* b = _game.world->GetBodyList(); b; b = b->GetNext()) {
            if (b->GetUserData() != NULL) {
                CCSprite *ballData = (CCSprite *)b->GetUserData();
                ballData.position = ccp(b->GetPosition().x * PTM_RATIO, b->GetPosition().y * PTM_RATIO);
                ballData.rotation = -1 * CC_RADIANS_TO_DEGREES(b->GetAngle());
            }
        }
    }

    // Pulls the character's body back down each frame and syncs sprites to their bodies.
    - (void)dropObject:(ccTime)dt {
        b2Vec2 force;
        force.Set(_body->GetLinearVelocity().x, _body->GetLinearVelocity().y - 1.0f);
        for (b2Body* b = _game.world->GetBodyList(); b; b = b->GetNext()) {
            if (b->GetUserData() == character) {
                b->SetLinearVelocity(force);
            }
        }
        _game.world->Step(dt, 10, 10);
        for (b2Body* b = _game.world->GetBodyList(); b; b = b->GetNext()) {
            if (b->GetUserData() != NULL) {
                CCSprite *ballData = (CCSprite *)b->GetUserData();
                ballData.position = ccp(b->GetPosition().x * PTM_RATIO, b->GetPosition().y * PTM_RATIO);
                ballData.rotation = -1 * CC_RADIANS_TO_DEGREES(b->GetAngle());
            }
        }
    }

    I have been trying to get the effect that Fruit Ninja has when shooting the fruits, but it seems hard to get that kind of animation, so I was wondering if anyone can point me in the right direction and/or give me sample code for a single object that gets thrown onto the screen in a given direction.
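    A rough starting point for the "thrown in a direction" part: the velocity handed to SetLinearVelocity is just a speed split into x/y components from a launch angle, after which gravity arcs the object like a tossed fruit. The sketch below is a language-neutral illustration in Java with made-up numbers; the Box2D call itself, the PTM_RATIO scaling, and the gravity setup stay in the Objective-C code above.

    public class LaunchVelocity {
        // Computes the x/y velocity for a projectile thrown at angleDegrees with the given speed.
        // The pair corresponds to the b2Vec2 that would be passed to body->SetLinearVelocity(...).
        static float[] launchVelocity(float angleDegrees, float speed) {
            double radians = Math.toRadians(angleDegrees);
            float vx = (float) (speed * Math.cos(radians));
            float vy = (float) (speed * Math.sin(radians));
            return new float[] { vx, vy };
        }

        public static void main(String[] args) {
            // Example: throw up and to the right at 60 degrees, speed 8 world units per second.
            float[] v = launchVelocity(60f, 8f);
            System.out.printf("vx=%.2f vy=%.2f%n", v[0], v[1]);
        }
    }

    Applied once when the object spawns (instead of re-applying a velocity every tick), and with world gravity enabled, this gives the familiar parabolic toss.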

    Read the article

  • Thoughts on my new template language/HTML generator?

    - by Ralph
    I guess I should have prefaced this with: Yes, I know there is no need for a new templating language, but I want to make a new one anyway, because I'm a fool. That aside, how can I improve my language? Let's start with an example:

    using "html5"
    using "extratags"

    html {
        head {
            title "Ordering Notice"
            jsinclude "jquery.js"
        }
        body {
            h1 "Ordering Notice"
            p "Dear @name,"
            p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}."
            p "Here are the items you've ordered:"
            table {
                tr {
                    th "name"
                    th "price"
                }
                for(@item in @item_list) {
                    tr {
                        td @item.name
                        td @item.price
                    }
                }
            }
            if(@ordered_warranty)
                p "Your warranty information will be included in the packaging."
            p(class="footer") {
                "Sincerely,"
                br
                @company
            }
        }
    }

    The "using" keyword indicates which tags to use. "html5" might include all the HTML5 standard tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want them to. The "extratags" library, for example, might add an extra tag called "jsinclude" which gets replaced with something like <script type="text/javascript" src="@content"></script>. Tags can optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single quotes to indicate "no variable substitution" like PHP does. Filter functions can be applied to variables like @variable|filter. Arguments can be passed to the filter: @variable|filter:@arg1,arg2="y". Attributes can be passed to tags by including them in (), like p(class="classname"). You will also be able to include partial templates like:

    for(@item in @item_list)
        include("item_partial", item=@item)

    Something like that, I'm thinking. The first argument will be the name of the template file, and subsequent ones will be named arguments where @item gets the variable name "item" inside that template. I also want to have a collection version like RoR has, so you don't even have to write the loop. Thoughts on this and exact syntax would be helpful :)

    Some questions:
    1. Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else?
    2. Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables.
    3. Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? This would make it more clear that the "tag" isn't behaving like just a normal tag that will get replaced with content, but controls the flow. Also, it would allow name reuse.
    4. Do you like the attribute syntax? (round brackets)
    5. How should I do template inheritance/layouts? In Django, the first line of the file has to include the layout file, and then you delimit blocks of code which get stuffed into that layout. In CakePHP, it's kind of backwards: you specify the layout in the controller.view function, the layout gets a special $content_for_layout variable, and then the entire template gets stuffed into that, and you don't need to delimit any blocks of code. I guess Django's is a little more powerful because you can have multiple code blocks, but it makes your templates more verbose... trying to decide what approach to take.
    6. Filtered variables inside quotes: "xxx {@var|filter} yyy", "xxx @{var|filter} yyy", "xxx @var|filter yyy" (i.e., @ inside, @ outside, or no braces at all).
I think no-braces might cause problems, especially when you try adding arguments, like @var|filter:arg="x", then the quotes would get confused. But perhaps a braceless version could work for when there are no quotes...? Still, which option for braces, first or second? I think the first one might be better because then we're consistent... the @ is always nudged up against the variable. I'll add more questions in a few minutes, once I get some feedback.
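    To make the @variable|filter:args discussion concrete, here is a rough Java sketch of how one such expression might be pulled apart. It is only an illustration under assumptions of my own (single filter, no quoting rules, no dotted-path resolution), not the proposed language's actual grammar or parser.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class FilterExpressionDemo {
        // Rough pattern: @variable, an optional |filter, and an optional :args tail.
        private static final Pattern EXPR =
                Pattern.compile("@([A-Za-z_][\\w.]*)(?:\\|(\\w+)(?::(.*))?)?");

        public static void main(String[] args) {
            String sample = "@ship_date|dateformat:format=\"Y-m-d\"";
            Matcher m = EXPR.matcher(sample);
            if (m.matches()) {
                System.out.println("variable: " + m.group(1)); // ship_date
                System.out.println("filter:   " + m.group(2)); // dateformat
                System.out.println("args:     " + m.group(3)); // format="Y-m-d"
            }
        }
    }

    A real implementation would tokenize rather than regex-match so that quoted arguments and chained filters don't trip it up, which is exactly where the brace-placement question above starts to matter.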

    Read the article

  • EF4 Code First Control Unicode and Decimal Precision, Scale with Attributes

    - by Dane Morgridge
    There are several attributes available when using code first with the Entity Framework 4 CTP5 Code First option. When working with strings you can use [MaxLength(length)] to control the length, and [Required] will work on all properties. But there are a few things missing. By default all strings will be created using Unicode, so you will get nvarchar instead of varchar. You can change this using the fluent API or you can create an attribute to make the change. If you have a lot of properties, the attribute will be much easier and require less code. You will need to add two classes to your project to create the attribute itself:

    public class UnicodeAttribute : Attribute
    {
        bool _isUnicode;

        public UnicodeAttribute(bool isUnicode)
        {
            _isUnicode = isUnicode;
        }

        public bool IsUnicode { get { return _isUnicode; } }
    }

    public class UnicodeAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, UnicodeAttribute>
    {
        public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, UnicodeAttribute attribute)
        {
            configuration.IsUnicode = attribute.IsUnicode;
        }
    }

    The UnicodeAttribute class gives you a [Unicode] attribute that you can use on your properties, and the UnicodeAttributeConvention will tell EF how to handle the attribute. You will need to add a line to the OnModelCreating method inside your context for EF to recognize the attribute:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
        base.OnModelCreating(modelBuilder);
    }

    Once you have this done, you can use the attribute in your classes to make sure that you get database types of varchar instead of nvarchar:

    [Unicode(false)]
    public string Name { get; set; }

    Another option that is missing is the ability to set the precision and scale on a decimal. By default decimals get created as (18,0). If you need decimals to be something like (9,2) then you can once again use the fluent API or create a custom attribute.
    As with the Unicode attribute, you will need to add two classes to your project:

    public class DecimalPrecisionAttribute : Attribute
    {
        int _precision;
        private int _scale;

        public DecimalPrecisionAttribute(int precision, int scale)
        {
            _precision = precision;
            _scale = scale;
        }

        public int Precision { get { return _precision; } }
        public int Scale { get { return _scale; } }
    }

    public class DecimalPrecisionAttributeConvention : AttributeConfigurationConvention<PropertyInfo, DecimalPropertyConfiguration, DecimalPrecisionAttribute>
    {
        public override void Apply(PropertyInfo memberInfo, DecimalPropertyConfiguration configuration, DecimalPrecisionAttribute attribute)
        {
            configuration.Precision = Convert.ToByte(attribute.Precision);
            configuration.Scale = Convert.ToByte(attribute.Scale);
        }
    }

    Add your line to the OnModelCreating:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
        modelBuilder.Conventions.Add(new DecimalPrecisionAttributeConvention());
        base.OnModelCreating(modelBuilder);
    }

    Now you can use the following on your properties:

    [DecimalPrecision(9,2)]
    public decimal Cost { get; set; }

    Both these options use the same concepts, so if there are other attributes that you want to use, you can create them quite simply. The key to it all is the PropertyConfiguration classes. If there is a class for the datatype, then you should be able to write an attribute to set almost everything you need. You could also create a single attribute to encapsulate all of the possible string combinations instead of having multiple attributes on each property. All in all, I am loving code first, and having attributes to control database generation instead of using the fluent API is huge and saves me a great deal of time.

    Read the article

  • WebLogic Server–Use the Execution Context ID in Applications–Lessons From Hansel and Gretel

    - by james.bayer
    I learned a neat trick this week. Don’t let your breadcrumbs go to waste like Hansel and Gretel did! Keep track of the code path, logs and errors for each request as they flow through the system. Earlier this week an OTN forum post in the WLS – General category by Oracle Ace John Stegeman asked how to retrieve the Execution Context ID so that it could be shown on an error page that a user could provide to a help desk, or use to check with application administrators, so they could look up what went wrong. What is the Execution Context ID (ECID)? Fusion Middleware injects an ECID as a request enters the system and it stays with the request as it flows from Oracle HTTP Server to Oracle Web Cache to multiple WebLogic Servers to the Oracle Database. It’s a way to uniquely identify a request across tiers. According to the documentation it’s: The value of the ECID is a unique identifier that can be used to correlate individual events as being part of the same request execution flow. For example, events that are identified as being related to a particular request typically have the same ECID value. The format of the ECID string itself is determined by an internal mechanism that is subject to change; therefore, you should not have or place any dependencies on that format.
    The novel idea John had was to extend this concept beyond the diagnostic information that is captured by Fusion Middleware. Why not also use this identifier in your logs and errors so you can correlate even more information together? Your logging might already identify the user, so why not identify the request so you can filter down even more. All you need to do inside of WebLogic Server to get hold of this information is invoke DiagnosticContextHelper: weblogic.diagnostics.context.DiagnosticContextHelper.getContextId() This class has other helpful methods to see other values tracked by the diagnostics framework too. This way I can see even more detail and get information across tiers.
    In performance profiling, this can be very handy to track down where time is being spent in code. I’ve blogged and made videos about this before. JRockit Flight Recorder can use the WLDF Diagnostic Volume in WLS 10.3.3+ to automatically capture and correlate lots of helpful information for each request without installing any special agents and with the out-of-the-box JRockit and WLS settings! You can see here how information is displayed in JRockit Flight Recorder about a single request as it calls a Servlet, which calls an EJB, which gets a DB connection, which starts a transaction, etc. You can get timings around everything and even see the SQL that is used. http://download.oracle.com/docs/cd/E21764_01/web.1111/e13714/using_flightrecorder.htm#WLDFC480 Recent versions of the WLS console are able to visualize this data too, so it works with other JVMs besides JRockit when you turn on WLDF instrumentation. I wrote a little sample application that verified that the ECID does actually cross JVM boundaries. I invoked a Servlet in one JVM, which acted as an EJB client to a Stateless Session Bean running in another JVM. Each call returned the same ECID. You need to turn on WLDF Instrumentation for this to work; otherwise the framework returns null. I’m glad John put me on to this API as I have some interesting ideas on how to correlate some information together.
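    As a small illustration of John's idea, here is a minimal servlet-filter sketch that captures the ECID and hands it to the page so an error view can print it for the help desk. Only DiagnosticContextHelper.getContextId() comes from the post; the filter class and the "ecid" request attribute name are assumptions of this sketch, and the call returns null unless WLDF instrumentation is enabled on the server.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import weblogic.diagnostics.context.DiagnosticContextHelper;

    // Hypothetical filter: stores the ECID on the request so an error page can display it.
    public class EcidCaptureFilter implements Filter {

        public void init(FilterConfig config) throws ServletException { }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            // Null when WLDF instrumentation is not enabled for this server.
            String ecid = DiagnosticContextHelper.getContextId();
            if (ecid != null) {
                request.setAttribute("ecid", ecid);
            }
            chain.doFilter(request, response);
        }

        public void destroy() { }
    }

    An error JSP mapped in web.xml could then render ${ecid} next to a "quote this ID to support" message, and the same string can be prepended to application log lines so they line up with the WLDF and Flight Recorder data.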

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I have been using SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase I usually find that you don’t learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I’m going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT. The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project a folder structure was provided with “Schema Objects” and “Scripts” in the root and a series of subfolders for each schema: Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer and hence SSDT has gone in completely the opposite direction; now no folders are created and new objects will get created in the root – it is at your discretion where they get moved to: After using SSDT for a few weeks I can safely say that I preferred the older way because I never used Solution Explorer to navigate my schema objects anyway so it didn’t bother me how many folders it created. Having said that the thought of a single long list of files in Solution Explorer without any folders makes me shudder so on this project I have been manually creating folders in which to organise files and I have tried to mimic the old way as much as possible by creating two folders in the root, one for all schema objects and another for Pre/Post deployment scripts: This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover new files get created with a filename of the object name + “.sql” and often people like to have an extra identifier in the filename to indicate the object type: The overall point is this – files and folders in your solution are going to change. Some version control systems (VCSs) don’t take kindly to files being moved around or renamed because they recognise the renamed/moved file simply as a new file and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS) and while it pains me to say it (as I am no great fan of TFS’s version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above because it is integrated right into the Visual Studio IDE. Thus the advice from this blog post is: If you are using SSDT consider using an Visual-Studio-integrated VCS that can easily handle file renames and file moves I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily and if that’s the case…great…let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might rise when using SSDT. More to come in the coming few weeks! @jamiet

    Read the article

< Previous Page | 591 592 593 594 595 596 597 598 599 600 601 602  | Next Page >