Search Results

Search found 25877 results on 1036 pages for 'information driven'.


  • MVC 4 and the App_Start folder

    - by pjohnson
    I've been delving into ASP.NET MVC 4 a little since its release last month. One thing I was chomping at the bit to explore was its bundling and minification functionality, for which I'd previously used Cassette, and been fairly happy with it. MVC 4's functionality seems very similar to Cassette's; the latter's CassetteConfiguration class matches the former's BundleConfig class, specified in a new directory called App_Start. At first glance, this seems like another special ASP.NET folder, like App_Data, App_GlobalResources, App_LocalResources, and App_Browsers. But Visual Studio 2010's lack of knowledge about it (no Solution Explorer option to add the folder, nor a fancy icon for it) made me suspicious. I found the MVC 4 project template has five classes there--AuthConfig, BundleConfig, FilterConfig, RouteConfig, and WebApiConfig. Each of these is called explicitly in Global.asax's Application_Start method. Why create separate classes, each with a single static method? Maybe they anticipate a lot more code being added there for large applications, but for small ones, it seems like overkill. (And they seem hastily implemented--some declared as static and some not, in the base namespace instead of an App_Start/AppStart one.) Even for a large application I work on with a substantial amount of code in Global.asax.cs, a RouteConfig might be warranted, but the other classes would remain tiny. More importantly, it appears App_Start has no special magic like the other folders--it's just convention. I found it first described in the MVC 3 timeframe by Microsoft architect David Ebbo, for the benefit of NuGet and WebActivator; apparently some packages will add their own classes to that directory as well. One of the first appears to be Ninject, as most mentions of that folder mention it, and there's not much information elsewhere about this new folder.
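
    For context, here is a minimal sketch of the pattern described above - roughly what the MVC 4 template wires up, trimmed for brevity (the bundle contents are illustrative, not taken from the post):

        // Global.asax.cs - each App_Start class is invoked explicitly at startup.
        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            WebApiConfig.Register(GlobalConfiguration.Configuration);
            FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);
        }

        // App_Start/BundleConfig.cs - the counterpart to Cassette's CassetteConfiguration.
        public class BundleConfig
        {
            public static void RegisterBundles(BundleCollection bundles)
            {
                bundles.Add(new ScriptBundle("~/bundles/jquery")
                    .Include("~/Scripts/jquery-{version}.js"));
                bundles.Add(new StyleBundle("~/Content/css")
                    .Include("~/Content/site.css"));
            }
        }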

    Read the article

  • Problems with both LightDM and GDM using DisplayLink USB monitor

    - by Austin
    When I use LightDM, it will auto-login to desktop just fine. The only problem is Compiz doesn't work, and menus don't work. I can't right-click the desktop, and I can't select program menus in the top bar (I.e clicking "File" does nothing). When I use GDM, I only get a blank blue screen and the mouse cursor. I can't Ctrl+Alt+Backspace to restart, but I can Ctrl+Alt+F1 and Ctrl+Alt+F7 to switch modes. I don't think it's auto-logging me in, but I'm not sure. It plays the login screen noise. Will update with more information when I get home! EDIT: Okay, so I did a fresh install, just to ensure I hadn't borked something playing in the console. I reconfigured my setup as I did before, with the same results. Here's what I followed. The only difference is that instead of setting "vga=normal nomodeset" I set "GRUB_GFXPAYLOAD_LINUX = text". Also I only have the DisplayLink monitor configured in my xorg.conf file. At this point I'm using the open radeon driver, although I used the proprietary ati driver before. I'm not sure if I'm having a problem with: - X configuration - Graphics driver - DisplayLink driver - Unity - LightDM - Compiz - Or something else The resolution of the monitor is 800x480, 16bit. I tried setting a larger virtual resolution of 1200x720 (because the real resolution is lower than the recommended resolution), but it causes Ubuntu to boot into low graphics mode. When I get home I'm going to install the fglrx driver and see if it enables virtual resolutions, which may further enable my window manager to function properly.

    Read the article

  • XFS and loss of data when power goes down

    - by culebrón
    Each time the electricity goes down, my desktop (without a UPS) loses some recently written information. Opera can lose settings, history, cache, or mail accounts (thank heavens I was wise to use IMAP), partially or all together; a whole file (completed and saved) in Geany appeared empty (and I hadn't committed it to Git); Rhythmbox lost all its podcast subscription data. I'm afraid there are other losses I just didn't notice. What's the reason? An in-memory file cache, a RAM disk? Or non-atomic file writes in XFS? I have Ubuntu 9.10 and XFS on both the / and /home partitions. Is ext4 safer in such circumstances? I've seen that ext3 is faster. Is it as safe as ext4? Given that the apartment I rent shares a common bus and a single safety switch with several other apartments, and the neighbors - alone or together - overload it at least once every week, the lights go down often enough for this to be an issue.

    Read the article

  • Microsoft Sync Framework

    - by kaleidoscope
    Introduction The Microsoft Sync Framework is a platform that enables collaboration and offline access for applications, services and devices. Sync Framework features technologies and tools that enable roaming, data sharing and taking data offline. Moreover, developers can build synchronization ecosystems that integrate any application with data from any store, by using any protocol over any network. Highlights * Add sync support to new and existing applications, services and devices * Enable collaboration and offline capabilities for any application * Roam and share information from any data store, over any protocol and over any network configuration * Leverage sync capabilities exposed in Microsoft technologies to create sync ecosystems * Extend the architecture to support custom data types including files Benefits of using Sync Framework * An extensible model that lets you integrate multiple data sources into a synchronization ecosystem. * A managed API for all components and a native API for select components. * Conflict handling for automatic and custom resolution schemes. * Filters that let you synchronize a subset of data, such as only those files that contain images. * A compact and efficient metadata model that enables synchronization for virtually any participant, without significant changes to the data store: - Any data store     Add synchronization to a wide range of applications, services and devices. - Any data type     Introduce new data types to synchronize. - Any protocol     Use existing architectures and protocols to synchronize data. The transport-agnostic architecture allows integration of synchronization into a variety of protocols, including over-the-air and embedded devices. - Any network configuration     Enable synchronization for your applications, devices and services in true peer-to-peer or hub-and-spoke configurations. Easily recover from network interruptions. Reduce network traffic by efficiently selecting changes to synchronize. Technorati Tags: Anish Sharma,Microsoft Sync Framework
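
    To make the "any data store, any protocol" idea concrete, here is a hedged sketch of two-way file synchronization using the managed API (it assumes the Microsoft.Synchronization and Microsoft.Synchronization.Files assemblies are referenced; the folder paths are placeholders):

        using Microsoft.Synchronization;
        using Microsoft.Synchronization.Files;

        class FileSyncDemo
        {
            static void Main()
            {
                // Each provider wraps one replica; here two local folders stand in for the endpoints.
                using (var source = new FileSyncProvider(@"C:\ReplicaA"))
                using (var destination = new FileSyncProvider(@"C:\ReplicaB"))
                {
                    var orchestrator = new SyncOrchestrator
                    {
                        LocalProvider = source,
                        RemoteProvider = destination,
                        Direction = SyncDirectionOrder.UploadAndDownload // two-way sync
                    };

                    SyncOperationStatistics stats = orchestrator.Synchronize();
                    System.Console.WriteLine("Uploaded {0}, downloaded {1} changes.",
                        stats.UploadChangesTotal, stats.DownloadChangesTotal);
                }
            }
        }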

    Read the article

  • Guidance for Web XML Api's

    - by qstarin
    I have to create an API for our application that is accessible over HTTP. I envision the API's responses to be simple XML documents. It won't be a REST API (not in the strict sense of REST). I am fairly new to this space - of course I've had to consume some Web APIs in my work, but often they are already wrapped in language-native libraries (e.g., TweetSharp). I'm looking for information to guide the design of an API. Are there any articles, blog posts, etc. that review and expound upon the design choices to be made in a Web API? Design choices would be things like: how to authenticate; URL structure; whether the URL a client POSTs to should determine the action being performed, or whether all requests should go to a common URL with part of the POSTed data responsible for routing to a command; whether all responses should share the same document root or errors should have a different root; and so on. Ideally, such articles or blog posts would enumerate the common variations for any given design point and expound on the advantages and disadvantages, so that they inform my own decision (as opposed to articles that simply explain one single way to do something). Does anyone have any links or wisdom they can share?

    Read the article

  • Per-vertex animation with VBOs: VBO per character or VBO per animation?

    - by charstar
    Goal To leverage the richness of well vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation curve solver (reinventing Blender vertex groups and IPOs), if possible. Scenario Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load-time, that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using a each mesh and its animations simultaneously in-game (they will likely be at different frames of the same animation at the same time). Assume color and texture coordinate buffers are static. Current Considerations Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING). Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Then each character simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices+numNormals)*frameNumber). (VBO in STATIC) Known Trade-Offs In 1 above: Each VBO would be small but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame. Both client and pipeline intensive. In 2 above: There would be few VBOs therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large. Are there any pitfalls to number 2 (aside from finite memory)? I've found a lot of information on what you can do, but no real best practices. Are there other considerations or methods that I am missing?
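
    A hedged sketch of what option 2 could look like with raw OpenGL calls, assuming interleaved position/normal floats and a per-character frame index (the attribute locations and variable names are illustrative; note that buffer offsets are measured in bytes, not in vertex/normal counts):

        /* One STATIC_DRAW VBO holds every frame of one animation, frames laid out back to back.
         * positionLoc/normalLoc come from glGetAttribLocation; animationVbo, numVertices and
         * currentFrame are assumed to exist for this character. */
        const GLsizei floatsPerVertex = 6;                          /* 3 position + 3 normal, interleaved */
        const GLsizei stride          = floatsPerVertex * sizeof(GLfloat);
        GLsizeiptr    frameBytes      = (GLsizeiptr)numVertices * stride;
        GLintptr      frameOffset     = (GLintptr)currentFrame * frameBytes;

        glBindBuffer(GL_ARRAY_BUFFER, animationVbo);

        /* Point both attributes at the current frame's slice of the buffer. */
        glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, stride,
                              (const GLvoid *)frameOffset);
        glVertexAttribPointer(normalLoc,   3, GL_FLOAT, GL_FALSE, stride,
                              (const GLvoid *)(frameOffset + 3 * sizeof(GLfloat)));

        glDrawArrays(GL_TRIANGLES, 0, numVertices);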

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing. It greatly simplifies the deployment of an optimized Hadoop Cluster – whether that cluster is used for batch or real-time processing. The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security. Oracle Database customers have benefited from a rich set of security features: encryption, redaction, data masking, database firewall, label-based access control – and much, much more. They want similar capabilities with their Hadoop cluster. Unfortunately, Hadoop wasn’t developed with security in mind. By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database. Some critical security features have been implemented – but even those capabilities are arduous to set up and configure. Oracle believes that a key element of an optimized appliance is that its data should be secure. Therefore, by default the BDA delivers the “AAA of security”: authentication, authorization and auditing.
    Security Starts at Authentication
    A successful security strategy is predicated on strong authentication – for both users and software services. Consider the default configuration for a newly installed Oracle Database; it’s been a long time since you had a legitimate chance at accessing the database using the credentials “system/manager” or “scott/tiger”. The default Oracle Database policy is to lock accounts thereby restricting access; administrators must consciously grant access to users.
    Default Authentication in Hadoop
    By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system. Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user. In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt. When logged in as the hr user, you can see the following files. Notice, we’re using the Hadoop command line utilities for accessing the data:
        $ hadoop fs -ls /user/hrdata
        Found 1 items
        -rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt
        $ hadoop fs -cat /user/hrdata/salaries.txt
        Tom Brady,11000000
        Tom Hanks,5000000
        Bob Smith,250000
        Oprah,300000000
    User DrEvil has access to the cluster – and can see that there is an interesting folder called “hrdata”.
        $ hadoop fs -ls /user
        Found 1 items
        drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata
    However, DrEvil cannot view the contents of the folder due to lack of access privileges:
        $ hadoop fs -ls /user/hrdata
        ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------
    Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder’s ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster:
        $ sudo useradd hr
        $ sudo passwd
        $ su hr
        $ hadoop fs -cat /user/hrdata/salaries.txt
        Tom Brady,11000000
        Tom Hanks,5000000
        Bob Smith,250000
        Oprah,300000000
    Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised.
Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS. Big Data Appliance Provides Secure Authentication The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this thru Kerberos integration. Figure 1: Kerberos Integration The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment. Authorize Access to Sensitive Data Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements: Secure Authorization:  the ability to control access to data and/or privileges on data for authenticated users. Fine-Grained Authorization:  the ability to give users access to a subset of the data (e.g. column) in a database Role-Based Authorization:  the ability to create/apply template-based privileges based on functional roles. With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system. Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views. Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. 
For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future. Audit Hadoop Cluster Activity Auditing is a critical component to a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster: Figure 3: Monitored Hadoop services. At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include: MapReduce:  correlate the MapReduce job that accessed the file Oozie:  describes who ran what as part of a workflow Hive:  captures changes were made to the Hive metadata The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA. Figure 4: Consolidated audit data across the enterprise.  Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger violations of audit policies. Conclusion Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.
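
    As a footnote to the Sentry discussion above, a hedged sketch of what the role-based grants could look like in HiveQL on a Sentry-enabled cluster (the database, role and group names are invented for illustration):

        -- Create a functional role and give it read access to the hr database.
        CREATE ROLE hr_analyst;
        GRANT SELECT ON DATABASE hr TO ROLE hr_analyst;

        -- Privileges granted on the database object are inherited by its tables and views.
        -- Map the role to an existing Linux/LDAP group; its members pick up the privileges.
        GRANT ROLE hr_analyst TO GROUP hr_analysts;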

    Read the article

  • Automatically select last row in a set in Excel

    - by Luke
    In Excel 2003, I am trying to keep track of some petty cash, and have it set up with the denominations along the top row, along with a subtotal and a difference column. I want a small section that shows how many rolls of coins I should have, by taking the total amount, dividing it by however many coins should be in a roll, and rounding down to the nearest whole number. That part is fine. What I want is for that one section (how many rolls I should have) to be based on the last row that has information in it. For example, if the last row is row 13, it should read the data from B13, C13, D13, etc. I don't mind learning macros, if that's what the solution requires. I just don't want to be manually selecting the last row each time; I want the worksheet to know automatically.
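
    One hedged, macro-free way to do this (a sketch only - it assumes the counts for one denomination live in B2:B1000 and that a roll holds 40 coins; adjust the range and divisor to the actual sheet):

        Last non-empty value in column B:
            =LOOKUP(2, 1/(B2:B1000<>""), B2:B1000)

        Rolls for that denomination, rounded down to a whole number:
            =INT(LOOKUP(2, 1/(B2:B1000<>""), B2:B1000) / 40)

    The LOOKUP(2, 1/(range<>""), range) trick works in Excel 2003 and always returns the value from the last non-blank cell in the range, so the "rolls" section follows the data as new rows are added.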

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are use or unused, complete or missing some columns is irrelevant, this is simply the physical stats of all indexes; disabled indexes are ignored however. In it’s simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and the NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID (‘AnyDatabaseName’). The first columns we get back are database_id and object_id. 
These are pretty explanatory and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips  gives us   SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id]     These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a  function in here to make it easier to work with a specific table. eg. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID(‘AdventureWorks2008.Person.Address’) , 1, NULL, NULL) AS ddips   Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N’AdventureWorks_2008′); SET @object_id = OBJECT_ID(N’AdventureWorks_2008.Person.Address’); IF @db_id IS NULL BEGINPRINT N’Invalid database’; ENDELSE IF @object_id IS NULL BEGINPRINT N’Invalid object’; ENDELSE BEGINSELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL , ‘LIMITED’); END; GO In cases where the results of querying this dmv don’t have any effect on other processes (i.e. simply viewing the results in the SSMS results area)  then it will be noticed when the results are not consistent with the expected results and in the case of this blog this is the method I have used. So, now we can relate the values in these columns to something that we recognise in the database lets see what those other values in the dmv are all about. The next columns are: We’ll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level  as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode. fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode. avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode. page_count. Total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size-in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). 
This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having it’s data in 3 pieces might not be too big an issue as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index taking into account the multiple files, type of allocation unit and the previously mentioned characteristics if index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep it in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind DBAs need to use an ‘intelligent’ process that assesses key characteristics of an index and decides on the best, if any, defragmentation method to apply should be used. There is a simple example of this in the sample code found in the Books OnLine content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let’s get back on track then. Querying the dmv with the fifth parameter value as ‘DETAILED’ takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look a we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be. A detail of the smallest and largest records in the index. 
Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently then potentially the DBA should consider; the fill_factor of the index in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly. the columns used in the index should be analysed to avoid new records needing to be inserted in the middle of the index but rather always be added to the end. * – it’s approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly.  Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred, run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
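
    As a follow-on to the article above, a hedged sketch of the sort of "intelligent" check it describes. The 5%/30% thresholds are common rules of thumb rather than values from the article, so treat them as a starting point and test on a non-production server first:

        -- Indexes in the current database whose logical fragmentation suggests maintenance,
        -- ignoring heaps and very small indexes where fragmentation rarely matters.
        SELECT  DB_NAME([ddips].[database_id])        AS [DatabaseName] ,
                OBJECT_NAME([ddips].[object_id])      AS [TableName] ,
                [i].[name]                            AS [IndexName] ,
                [ddips].[avg_fragmentation_in_percent] ,
                [ddips].[page_count] ,
                CASE WHEN [ddips].[avg_fragmentation_in_percent] >= 30 THEN 'REBUILD'
                     WHEN [ddips].[avg_fragmentation_in_percent] >= 5  THEN 'REORGANIZE'
                     ELSE 'NONE' END                  AS [SuggestedAction]
        FROM    [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN [sys].[indexes] AS i
                ON  [i].[object_id] = [ddips].[object_id]
                AND [i].[index_id]  = [ddips].[index_id]
        WHERE   [ddips].[page_count] > 100    -- skip tiny indexes
            AND [ddips].[index_id]   > 0      -- skip heaps
        ORDER BY [ddips].[avg_fragmentation_in_percent] DESC;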

    Read the article

  • apt-get install phpmyadmin on debian doesn't install /etc/phpmyadmin/apache.conf

    - by Christian Nikkanen
    I'm trying to install phpMyAdmin on my webserver, using this guide: http://www.howtoforge.com/ubuntu_debian_lamp_server I did that once, and it worked like a dream, but I hated the look of phpMyAdmin (maybe the oldest layout ever) and decided to delete it. I didn't know that removal is done with apt-get remove phpmyadmin, so I just ran rm * in the phpmyadmin directory and thought that was it. However, since I can't find the Debian build of phpMyAdmin anywhere, I want to install it again, but when I add Include /etc/phpmyadmin/apache.conf to /etc/apache2/apache2.conf and restart Apache, it gives me this error: apache2: Syntax error on line 73 of /etc/apache2/apache2.conf: Could not open configuration file /etc/phpmyadmin/apache.conf: No such file or directory Action 'configtest' failed. The Apache error log may have more information. failed! No matter what I try, I always get this error, and phpMyAdmin isn't there.
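
    Because the package's files were deleted by hand, dpkg still believes phpmyadmin is installed, so a plain install won't put /etc/phpmyadmin/apache.conf back. A hedged sketch of one way to recover on Debian/Ubuntu (review the prompts before confirming):

        # Purge the half-removed package (clearing its configuration state), then reinstall it.
        sudo apt-get purge phpmyadmin
        sudo apt-get install phpmyadmin

        # Alternatively, reinstall in place and ask dpkg to restore any missing conffiles.
        sudo apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" phpmyadmin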

    Read the article

  • SCVMM upgrade scenario

    - by pigeon
    I've read some information on TechNet about upgrading SCVMM 2008 to 2012 but can't quite figure out the best way to approach this. The current setup is that we've got SCVMM 2008 R2 installed, but against best practice it was actually installed on the Hyper-V host machine: since it's a small-scale deployment, it's just a single-server setup with SCVMM sitting on the same host rather than in a VM. From what I've read an in-place upgrade should be possible, which will incur a restart, but I don't have the luxury of another server to shift the VMs onto while doing this, and I don't want to risk anything happening to the Hyper-V role. Ideally I would probably prefer just to get SCVMM 2012 into a VM of its own and remove the 2008 version from the host machine. Has anyone done an upgrade like this, or does anyone have any recommendations about how to approach it?

    Read the article

  • Ask the Readers: Which Google Services Do You Use?

    - by Asian Angel
    Nearly everyone uses at least one of Google’s services while browsing each day. What we want to know this week is which Google services do you use? Image by adria.richards Google offers a multitude of services such as e-mail, calendar, and docs to help you manage your online life. Some of you may only use a few of the available services while others are power users. A fair number of businesses and schools have also switched over to Google apps and services for their organization. Whether it is at home, work, or both Google has become a part of our daily lives. Being able to access everything in one place can be extremely useful but equally frustrating if Google’s services experience any downtime. Another concern for some people is the issue of privacy over having so much information stored by a single company. Ultimately the final decision lies with you. Which Google services do you use at home or at work? Let us know in the comments!

    Read the article

  • delete key on linux does not work

    - by Gauthier Fleutot
    Hi! My delete key does not work in Ubuntu; it does nothing. I understand that this is a common problem, but I couldn't solve it with the information I found elsewhere. I ran xev. Pressing the 'a' key gives:
        KeyRelease event, serial 30, synthetic NO, window 0x2c00001, root 0x1a6, subw 0x0, time 7255643, (-113,-107), root:(425,300), state 0x2010, keycode 38 (keysym 0x61, a), same_screen YES, XLookupString gives 1 bytes: (61) "a"
        XFilterEvent returns: False
    Pressing 'Delete' gives:
        FocusOut event, serial 30, synthetic NO, window 0x2c00001, mode NotifyGrab, detail NotifyAncestor
        FocusIn event, serial 30, synthetic NO, window 0x2c00001, mode NotifyUngrab, detail NotifyAncestor
        KeymapNotify event, serial 30, synthetic NO, window 0x0, keys: 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    From there I don't know what to do. Help?
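
    The NotifyGrab/NotifyUngrab focus events suggest another client has grabbed the Delete key (a keyboard-shortcut binding, for example) rather than the key being unmapped. A couple of hedged diagnostic commands to narrow it down (not a guaranteed fix):

        # Confirm the key is actually mapped to Delete in the X keymap.
        xmodmap -pke | grep -i delete

        # Look for a window-manager shortcut that may have grabbed the key (GNOME 2 / Metacity).
        gconftool-2 -R /apps/metacity/global_keybindings | grep -i delete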

    Read the article

  • How to become an Oracle SOA expert?

    - by Jürgen Kress
    Our SOA business continues to grow – more and more open SOA jobs are posted – do you want to participate and become an Oracle SOA expert? Follow the five steps to your SOA success: Join the Oracle SOA Partner Community: You will receive our monthly newsletter with all the training, marketing and sales material and the latest product updates SOA Partner Community Forum: Keynotes delivered by our VP for product management and sales, breakouts by our experts – and make sure you sign up for the hands-on bootcamps! The Forum is the best place to network with the Community & ACEs and to exchange your project experience! Blog: Latest updates from the SOA Community on a weekly basis; for additional blogs please visit our wiki Books: What you must read when you start with Oracle SOA Suite! SOA & Application Grid Specialization eBook and the SOA & Application Grid Specialization Checklist to become a certified expert and partner! See you in Utrecht Jürgen Kress For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Technorati Tags: Oracle SOA Expert,Oracle,SOA,SOA Community,SOA Partner Community,OPN,Jürgen Kress

    Read the article

  • Detect two specific objects collision with bullet physics

    - by sebap123
    I have a problem with detecting collisions between specific objects in my game using Bullet physics. I know that objects collide with each other automatically and I don't have to do anything more for that. However, I need to be notified when one particular object collides with one of the others. That is awkward to describe, so I will explain what I want to achieve. I have a ball which hits a wall built from tubes. Everything sits on the floor. When the ball hits the wall, some fragments fall down towards infinity, so below the floor I have a btStaticPlaneShape. This is the place where most of the objects stop, and then I can start another action - but not all of them stop there. I've tried the checkCollideWith function, but it isn't a good method, as the reference and wiki say. I've also checked the "contact information" method described in the wiki (http://bulletphysics.org/mediawiki-1.5.8/index.php/Collision_Callbacks_and_Triggers), but this isn't a good method either, because it is extremely hard to identify what is what when colliding. You also have to remember that the ball is almost always colliding with something - the floor, the wall, or ground level. So is there any other method to check what is colliding with what?
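
    For reference, a hedged sketch of the contact-information approach with user pointers used as tags, which makes the "what is what" identification less painful. It assumes each btCollisionObject had setUserPointer() called with a pointer to your own game-object struct; the GameObject type and function names are illustrative:

        #include <btBulletDynamicsCommon.h>

        struct GameObject;   // your own per-entity data, attached via setUserPointer()

        // Call after stepSimulation(): walk the contact manifolds and look for the pairs we care about.
        void checkContactsWithPlane(btDynamicsWorld* world, GameObject* belowFloorPlane, GameObject* ball)
        {
            btDispatcher* dispatcher = world->getDispatcher();
            const int numManifolds = dispatcher->getNumManifolds();

            for (int i = 0; i < numManifolds; ++i)
            {
                btPersistentManifold* manifold = dispatcher->getManifoldByIndexInternal(i);
                if (manifold->getNumContacts() == 0)
                    continue;   // pair is close but not actually touching

                const btCollisionObject* a = static_cast<const btCollisionObject*>(manifold->getBody0());
                const btCollisionObject* b = static_cast<const btCollisionObject*>(manifold->getBody1());

                // The user pointers identify which game objects these bodies belong to.
                GameObject* objA = static_cast<GameObject*>(a->getUserPointer());
                GameObject* objB = static_cast<GameObject*>(b->getUserPointer());

                bool fragmentReachedPlane =
                    (objA == belowFloorPlane && objB != ball) ||
                    (objB == belowFloorPlane && objA != ball);

                if (fragmentReachedPlane)
                {
                    // A wall fragment has reached the plane below the floor: trigger the next action here.
                }
            }
        }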

    Read the article

  • Bug in CDP implementation

    - by Suraj
    We are developing a Linux-based Ethernet switch which has 6 ports, and we have implemented the CDP protocol. I have connected a Cisco device to port 2. When I query the Cisco device, I get the reply, but instead of getting lan1 (port 1 = lan0 ... port 6 = lan5), I always get the interface name reported as eth0. The same is the case for all the ports. What changes are required to get the correct interface name? I will be very thankful for the information. The SNAP packet is received in the routine snap_rcv() in the file "linux-2.6.XX/net/802/psnap.c". Regards, Suraj
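
    One hedged place to look, given the symptom above: if the CDP reply builds its port-ID string from a fixed name, it will always report eth0. Inside the kernel's SNAP receive path the arriving interface is available on the socket buffer, so the idea (illustrative only, not the actual switch code) is roughly:

        #include <linux/skbuff.h>
        #include <linux/netdevice.h>
        #include <linux/string.h>

        /* Called from the SNAP/CDP receive handler: take the port name from the
         * net_device the frame arrived on (e.g. "lan1" for port 2), not from a constant. */
        static void cdp_fill_port_id(struct sk_buff *skb, char *port_id, size_t len)
        {
            strlcpy(port_id, skb->dev->name, len);
        }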

    Read the article

  • How to make Xbox joypad work with Bastion?

    - by Alistair Buxton
    I have an XBox joypad: Bus 005 Device 004: ID 045e:0289 Microsoft Corp. Xbox Controller S When I start Bastion from the terminal the following is output: Number of joysticks: 1 Number of buttons for joystick: 0 - 10 Number of axes for joystick: 0 - 6 Number of PovHats for joystick: 0 - 1 When I load up the game it displays a message "press any key" and at this point, if I press a button on the joypad it advances to the main menu. However, the up/down/left/right controls do not work and the button will not operate the menu. When I enter the control configuration, the joypad section is disabled and displays a message "joypad not detected." If I enter the control customization and try to reconfigure one of the controls, noises can be heard when pressing joypad buttons, but the input is otherwise ignored. Further information which may or may not be relevant: My controller is an original Xbox controller, not a 360 controller. XNA games on Windows apparently only work with Xbox360 controllers because they use xinput rather than direct input, see eg here. My controller works (almost) properly with MonoGame trunk samples, but Bastion uses a modified MonoGame and crashes when run against trunk, so I can't add debugging to see where the problem is. Bug can also be reproduced with a Xbox 360 wired controller.

    Read the article

  • Unix Server Partitioning & Filesystem Layout

    - by user1717735
    There's a lot of contradictory information about Unix server partitioning out on the internet, so I need some advice on how to proceed. So far, on the servers in our test environment I didn't really care about partitioning and configured a single monolithic / plus a swap partition. This partitioning scheme doesn't seem like a good idea for our production servers. I have found a good starting point here, but it seems very vague on the details. Basically I have a server on which I will be running a basic LAMP stack (Apache, PHP, and MySQL). It will have to handle file uploads (up to 2GB). The system has a 2TB RAID 1 array. I plan to set: / 100GB, /var 1000GB (Apache files and MySQL files will be here), /tmp 800GB (handles the PHP temp files), /home 96GB, swap 4GB. Does this sound sane, or am I over-complicating things?

    Read the article

  • One bigger Virtual Machine in Cloud

    - by flyer
    I just set up virtual machines on a single physical host with Vagrant (this is just a test environment, not production!). I want to use Puppet to configure them and then try to set up OpenStack. I am not sure I understand how this should look in the end. Is it possible to end up with the architecture below, where OpenStack lets me run one virtual machine with e.g. 12 cores?
    -------------------------------
    |          VM (12c)           |
    -------------------------------
    |  NOVA   |  NOVA   |  NOVA   |
    -------------------------------
    |          OpenStack          |
    -------------------------------
    | VM (4c) | VM (4c) | VM (4c) |
    -------------------------------
    |       Bare Metal (8c)       |
    -------------------------------
    I need this information to have a bigger picture before continuing.

    Read the article

  • VSS error 12293 after system disk clone (Win2003)

    - by carlpett
    Hi! After cloning a Windows 2003 installation from a single drive onto two mirrored drives using Acronis Disk Director, VSS no longer works, logging events 12293 and 7001 when trying to use backup tools, and additionally giving error 0x8004230f when accessing the Shadow Copies tab of the disk properties. I've googled this quite thoroughly, and found a suggested fix[1]: replacing the MBR signature of the disk. This would cause Windows to invalidate the old shadow copy information, which supposedly would make it all work again. However, I am a bit nervous about this... Is there a possibility of messing this up somehow, given that the MBR originates from a single-disk install and now resides on a RAID mirror? Has anyone here had this problem and solved it, with this method or another? [1] http://kb.backupassist.com/articles.php?aid=2971 (under the header Resolution 2)

    Read the article

  • What to do if my computer is infected by a virus or a malware?

    - by Gnoupi
    This question comes back often, and the suggested steps are very often the same. With the aim of concentrating useful information in one place, here is a community wiki about it. I expect our best minds to participate in it, so we can have a knowledge reference for something like this. What should I do if it seems that my computer is infected by a virus or malware on a Windows system? What are the symptoms of an infection? What are the steps to follow after noticing an infection? How can I get rid of it? As this is a community wiki, feel free to edit this question to improve it as well.

    Read the article

  • New PeopleSoft Applications Search

    - by Matthew Haavisto
    As you may have seen from the PeopleTools 8.52 Release Value Proposition , PeopleTools intends to introduce a new search capability in release 8.52. We believe this feature will not only improve the ability of users to find content, but will fundamentally change the way people navigate around the PeopleSoft ecosystem. PeopleSoft applications will be delivering this new search in coming releases and feature packs. PeopleSoft Application Search is actually a framework—a group of features that provides an improved means of searching for a variety of content across PeopleSoft applications. From a user experience perspective, the new search offers a powerful, keyword-based search presented in a familiar, intuitive user experience. Rather than browsing through long menu hierarchies to find a page, data item, or transaction, users can use PeopleSoft Application Search to directly navigate to desired locations. We envision this to be similar to how people navigate across the internet. This capability may reduce or even eliminate the need to navigate PeopleSoft applications using the existing application menu system (though menus will still be available to people that prefer that method). The new search will be available at any point in an application and can be configured to span multiple PeopleSoft applications. It enables users to initiate transactions or navigate to key information without using the PeopleSoft application menus. In addition, filters and facets will enable people to narrow their search results sets, making it easier to identify and navigate to desired application content. Action menus are embedded directly in the search results, allowing users to navigate straight to specific related transactions – pre-populated with the selected search results data. PeopleSoft Applications Search framework uses Oracle’s Secure Enterprise Search as its search engine. Most Customers will benefit from the new search when it is delivered with applications. However, customers can start deploying it after a Tools-only upgrade. In this case, however, customers would have to create their own indices and implement security.

    Read the article

  • Internet Explorer 9 Preview 2 link + webcasts for developers

    - by Eric Nelson
    At Web Directions last week in London (10th and 11th June 2010) I promised several folks I would put up a blog post to more information on IE 9.0. True to my word (albeit a little later than I had hoped), here is what I was thinking of: Install First up, Install Preview 2 and try out the demos I was showing at the conference. Remember that IE9 Preview installs side by side with IE8/7 etc. It is not a beta nor is it intended to be a full browser. It is a … preview :-)   Including good old SVG-oids :-) Learn And then check out the following webcasts which were recorded in March this year at MIX: In-Depth Look At Internet Explorer 9 Presenter:  Ted Johnson & John Hrvatin VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/CL28 Slides: Download Videos: MP4 Small WMV Large WMV High Performance Best Practices For Web Sites Presenter: Jason Weber VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/CL29 Slides: Download Videos: MP4 Small WMV Large WMV HTML5: Cross Browser Best Practices Presenter: Tony Ross VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/CL27 Slides: Download Videos: MP4 Small WMV Large WMV Internet Explorer Developer Tools Presenter: Jon Seitel VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/FT51 Slides: Download Videos: MP4 Small WMV Large WMV SVG: The Past, Present And Future of Vector Graphics For The Web Presenter: Patrick Dengler, Doug Schepers VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/EX30 Slides: Download Videos: MP4 Small WMV Large WMV Day 2 Keynote containing IE9 Presenter: Dean Hachamovitch VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/KEY02 Slides: Download Videos: MP4 Small WMV Large WMV

    Read the article

  • kubuntu 12.10 will not boot on mac 2.93Ghz intel core 2 duo

    - by Jake Sweet
    I feel like I've tried it all and nothing is changing. I've tried booting from a live USB and a live DVD, and I've checked the MD5 sums - everything matches up. I've even tried different distros with the same result on all of them; just for reference: Linux Mint 13 KDE and Fedora 17. I've also tried changing my live USB building software just in case - I've tried UNetbootin and a Linux USB builder - both with the same results, so my opinion is that it is a hardware issue, since I'm getting nearly the same result with all of these variables. So what is actually happening? I can boot up to a screen. I say "a screen" because the ways that the DVDs and USBs boot differ. On the live USB I reach a black screen with white text. It says "booting: done", then below it says "loading ramdrive: done", then below that it says "preparing to boot kernel, this may take a while" and "buckle in" or something to that effect. Then nothing - the computer freezes. I've waited up to 8 hours and still nothing. For the live DVD, everything goes according to the instructions in the PDF files for every distro, until Linux starts. I can only run in compatibility mode; when any other option is tried, the computer seems to freeze/stall/be a pain in my butt... Well, that seems to wrap it up. If I'm not explaining something well, I'm sorry - I can try to clear anything up; I'm not the best at descriptions. I'm leaving you with the tech specs of my Mac: 2.93GHz Intel Core 2 Duo, 4 GB RAM, NVIDIA GeForce GT 120 graphics, bought in late '09, the 24" model. Let me know if any more information will help. Thanks in advance.

    Read the article

  • CodePlex Daily Summary for Sunday, March 18, 2012

    CodePlex Daily Summary for Sunday, March 18, 2012Popular ReleasesRiP-Ripper & PG-Ripper: RiP-Ripper 2.9.28: changes NEW: Added Support for "PixHub.eu" linksSmartNet: V1.0.0.0: DY SmartNet ?????? V1.0callisto: callisto 2.0.21: Added an option to disable local host detection.Javascript .NET: Javascript .NET v0.6: Upgraded to the latest stable branch of v8 (/tags/3.9.18), and switched to using their scons build system. We no longer include v8 source code as part of this project's source code. Simultaneous multithreaded use of v8 now supported (v8 Isolates), although different contexts may not share objects or call each other. 64-bit .Net 4.0 DLL now included. (Download now includes x86 and x64 for both .Net 3.5 and .Net 4.0.)MyRouter (Virtual WiFi Router): MyRouter 1.0.6: This release should be more stable there were a few bug fixes including the x64 issue as well as an error popping up when MyRouter started this was caused by a NULL valuePulse: Pulse Beta 4: This version is still in development but should include: Logging and error handling have been greatly improved. If you run into an error or Pulse crashes make sure to check the Log folder for a recently modified log file so you can report the details of the issue A bunch of new features for the Wallbase.cc provider. Cleaner separation between inputs, downloading and output. Input and downloading are fairly clean now but outputs are still mixed up in the mix which I'm trying to resolve ...Google Books Downloader for Windows: Google Books Downloader-2.0.0.0.: Google Books DownloaderFinestra Virtual Desktops: 2.5.4501: This is a very minor update release. Please see the information about the 2.5 and 2.5.4500 releases for more information on recent changes. This update did not even have an automatic update triggered for it. Adds error checking and reporting to all threads, not only those with message loopsAcDown????? - Anime&Comic Downloader: AcDown????? v3.9.2: ?? ●AcDown??????????、??、??????,????1M,????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。??????AcPlay?????,??????、????????????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDo...ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Release Candidate: Your feedback is welcome - and this is your last chance to get your fixes in for this version! Includes installer for both Feature Server extension and Desktop extension, enhanced functionality for the Desktop tools, and enhanced built-in Javascript Editor for the Feature Server component. This release candidate includes fixes to beta 4 that accommodate domain users for setting up the Server Component, and fixes for reporting/uploading references tracked in the revision table. See Code In-P...C.B.R. : Comic Book Reader: CBR 0.6: 20 Issue trackers are closed and a lot of bugs too Localize view is now MVVM and delete is working. Added the unused flag (take care that it goes to true only when displaying screen elements) Backstage - new input/output format choice control for the conversion Backstage - Add display, behaviour and register file type options in the extended options dialog Explorer list view has been transformed to a custom control. 
New group header, colunms order and size are saved Single insta...Windows Azure Toolkit for Windows 8: Windows Azure Toolkit for Windows 8 Consumer Prv: Windows Azure Toolkit for Windows 8 Consumer Preview - Preview Release v1.2.1Minor updates to setup experience: Check for WebPI before install Dependency Check updated to support the following VS 11 and VS 2010 SKUs Ultimate, Premium, Professional and Express Certs Windows Azure Toolkit for Windows 8 Consumer Preview - Preview Release v1.2.0 Please download this for Windows Azure Toolkit for Windows 8 functionality on Windows 8 Consumer Preview. The core features of the toolkit include:...Facebook Graph Toolkit: Facebook Graph Toolkit 3.0: ships with JSON Toolkit v3.0, offering parse speed up to 10 times of last version supports Facebook's new auth dialog supports new extend access token endpoint new example Page Tab app filter Graph Api connections using dates fixed bugs in Page Tab appsCODE Framework: 4.0.20312.0: This version includes significant improvements in the WPF system (and the WPF MVVM/MVC system). This includes new styles for Metro controls and layouts. Improved color handling. It also includes an improved theme/style swapping engine down to active (open) views. There also are various other enhancements and small fixes throughout the entire framework.ScintillaNET: ScintillaNET 2.4: 3/12/2012 Jacob Slusser Added support for annotations. Issues Fixed with this Release Issue # Title 25012 25012 25018 25018 25023 25023 25014 25014 Visual Studio ALM Quick Reference Guidance: v3 - For Visual Studio 11: RELEASE README Welcome to the BETA release of the Quick Reference Guide preview As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has not been through an independent technical review Documentation ...AvalonDock: AvalonDock 2.0.0345: Welcome to early alpha release of AvalonDock 2.0 I've completely rewritten AvalonDock in order to take full advantage of the MVVM pattern. New version also boost a lot of new features: 1) Deep separation between model and layout. 2) Full WPF binding support thanks to unified logical tree between main docking manager, auto-hide windows and floating windows. 3) Support for Aero semi-maximized windows feature. 4) Support for multiple panes in the same floating windows. For a short list of new f...Windows Azure PowerShell Cmdlets: Windows Azure PowerShell Cmdlets 2.2.2: Changes Added Start Menu Item for Easy Startup Added Link to Getting Started Document Added Ability to Persist Subscription Data to Disk Fixed Get-Deployment to not throw on empty slot Simplified numerous default values for cmdlets Breaking Changes: -SubscriptionName is now mandatory in Set-Subscription. -DefaultStorageAccountName and -DefaultStorageAccountKey parameters were removed from Set-Subscription. Instead, when adding multiple accounts to a subscription, each one needs to be added ...IronPython: 2.7.2.1: On behalf of the IronPython team, I'm happy to announce the final release IronPython 2.7.2. This release includes everything from IronPython 54498 and 62475 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. 
Unlike previous releases, the assemblies for all supported platforms are included in the installer as well as the zip package, in the "Platforms" directory. IronPython 2...Kooboo CMS: Kooboo CMS 3.2.0.0: Breaking changes: When upgrade from previous versions, MUST reset the all the content type templates, otherwise the content manager might get a compile error. New features Integrate with Windows azure. See: http://wiki.kooboo.com/?wiki=Kooboo CMS on Azure Complete solution to deploy on load balance servers. See: http://wiki.kooboo.com/?wiki=Kooboo CMS load balance Update Jquery and Jquery ui to the lastest version(Jquery 1.71, Jquery UI 1.8.16). Tree style text content editing. See:h...New Projects4sr4ss1: Assignment 1 WDT Due date: 26 March 2012ADLN: Project to ADLNbook2guest: book2guestBundlingTweaks: BundlingTweaks makes it easier for developers to develop ASP.NET MVC 4 projects with Bundling/Minification support. You can now take advantage of Bundling, but still see non-minified JavaScript when debugging.COBOL SCREENSAVER: This project will show how to create a Windows screen saver using 1) an object-oriented isCOBOL GUI application, 2) an OpenCobol Windows native executable wrapper for the isCOBOL GUI, and 3) XML, XSLT, HTML5, and a Java FX 2.0 web browser component. Release 0.1 consists of a Java GUI application, the OpenCobol wrapper, and a README file detailing usage, system requirements, and installation of the recommended OpenCobol binaries for Windows.Colorado Time System Get CTS Results to Text File: This application will query a Colorado Time System 5, retrieve the lane results by event and heat. These can then be written to a text file.CommonEventLog: This project provides a simple means of logging messages and exceptions to a custom event log.ContainerTrack: Application for tracking container locationContent Widgets Orchard module: This Orchard module makes it possible to add arbitrary widgets to content types (with the option to disable per item).Cup of Badminton: This is a application for draw cup of badminton. You can insert, edit and delete player. You can choose many variant of draw.FSGREE: FSGREEKeepSafety: ??、??、?????????LibGrafx: LibGrafix is a C++ DLL that contains several classes useful when developing computer graphics applications, particularly OpenGL applications, in the Windows environment. It also includes a class that loads and displays the ModelX format exported by the XNA Model Viewer program. Linq to SQL code generator for Windows Phone: Linq to SQL code generator for Windows Phone is an add-in for Visual Studio 2010 or later to generate the Linq to SQL classes from a DBML diagram. Offline Navigation for Windows Phone 7: This is a simple implementation of offline navigation for windows phone 7 with map offset adjustment support.PayrollSystemRedux: Payroll case study book Robert Martin (Agile PPP in C#)PhoneFlipMenu Control for Windows Phone: A control that mimics the behaviour of the Email app's reply/reply all/forward menu.QTscenarist: QTscenarist makes it easier for Synfig animation to create Synfig movies. You can write scenaries and create your movie with it. It's developed in QT. recycle: This is a recycle web pageSharePoint 2010 Accordian Launch: SharePoint 2010 and SharePoint Foundation Quick Launch Accordian Solution.SharePoint 2010 JavaScript Registration Panel: The SharePoint 2010 JavaScript Registration Panel allows you to add JavaScript files to a site collection without SharePoint 2010 Designer intervention. 
This is helpful in scenarios where SharePoint Designer 2010 is disabled globally, but you have to add some JavaScript files (e.g. jQuery) to your site collection for every page load. The aim of this project is to provide following features: - load JavaScript files on every page load of your site collection - define the sequence for loadi...Sistema de Gestion Escolar: Projecto final monografico universidad dominicana O&MSmartNet: DYSmartNet????????????????????????。???TCP??,??????????,??????????... solution: solution team explorerTanmiaGrp: This is the tanmia base project library.VideoMobileSteraming: Will Update soonXNASter: XNASter is a short library that allows beginner developers to create simple XNA games. Right now it is in the very early stages of development. Included are the following parts: * 2D generic sprite-based object that supports movement, acceleration, etc. In the future, we are also planning to support: * KinectSkeleton controller Together with the project, there are also a few sample games that show how to use it.???WP: HateCoWP is Hatena Coco Client.???: ????????????????????。

    Read the article
