Search Results


  • Use BGInfo to Build a Database of System Information of Your Network Computers

    - by Sysadmin Geek
    One of the more popular tools of the Sysinternals suite among system administrators is BGInfo, which tacks real-time system information to your desktop wallpaper when you first log in. For obvious reasons, having information such as system memory, available hard drive space and system up time (among others) right in front of you is very convenient when you are managing several systems. A little known feature of this handy utility is the ability to have system information automatically saved to a SQL database or some other data file. With a few minutes of setup work you can easily configure BGInfo to record system information for all your network computers in a centralized storage location. You can then use this data to monitor or report on these systems however you see fit.

    BGInfo Setup

    If you are familiar with BGInfo, you can skip this section. However, if you have never used this tool, it takes just a few minutes to set up in order to capture the data you are looking for. When you first open BGInfo, a timer will be counting down in the upper right corner. Click the countdown button to keep the interface up so we can edit the settings. Now edit the information you want to capture from the available fields on the right. Since all the output will be redirected to a central location, don't worry about configuring the layout or formatting.

    Configuring the Storage Database

    BGInfo supports storing information in several database formats: SQL Server Database, Access Database, Excel and Text File. To configure this option, open File > Database.

    Using a Text File

    The simplest, and perhaps most practical, option is to store the BGInfo data in a comma separated text file. This format allows the file to be opened in Excel or imported into a database. To use a text file or any other file system type (Excel or MS Access), simply provide the UNC path to the respective file. The account running the task to write to this file will need read/write access to both the share and NTFS file permissions. When using a text file, the only option is to have BGInfo create a new entry each time the capture process is run, which will add a new line to the respective CSV text file.

    Using a SQL Database

    If you prefer to have the data dropped straight into a SQL Server database, BGInfo supports this as well. It requires a bit of additional configuration, but overall it is very easy. The first step is to create a database where the information will be stored. Additionally, you will want to create a user account to fill data into this table (and this table only). For your convenience, this script creates a new database and user account (run it as Administrator on your SQL Server machine):

        @SET Server=%ComputerName%.
        @SET Database=BGInfo
        @SET UserName=BGInfo
        @SET Password=password
        SQLCMD -S "%Server%" -E -Q "Create Database [%Database%]"
        SQLCMD -S "%Server%" -E -Q "Create Login [%UserName%] With Password=N'%Password%', DEFAULT_DATABASE=[%Database%], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF"
        SQLCMD -S "%Server%" -E -d "%Database%" -Q "Create User [%UserName%] For Login [%UserName%]"
        SQLCMD -S "%Server%" -E -d "%Database%" -Q "EXEC sp_addrolemember N'db_owner', N'%UserName%'"

    Note the SQL user account must have 'db_owner' permissions on the database in order for BGInfo to work correctly. This is why you should have a SQL user account specifically for this database. Next, configure BGInfo to connect to this database by clicking on the SQL button. Fill out the connection properties according to your database settings.
    Select whether to keep only one entry per computer or a history of each system. The data will then be dropped directly into a table named "BGInfoTable" in the respective database.

    Configure User Desktop Options

    While the primary function of BGInfo is to alter the user's desktop by adding system info as part of the wallpaper, for our use here we want to leave the user's wallpaper alone so this process runs without altering any of the user's settings. Click the Desktops button and configure the Wallpaper modifications to not alter anything.

    Preparing the Deployment

    Now we are all set to deploy the configuration to the individual machines so we can start capturing the system data. If you have not done so already, click the Apply button to create the first entry in your data repository. If all is configured correctly, you should be able to open your data file or database and see the entry for the respective machine. Now click the File > Save As menu option and save the configuration as "BGInfoCapture.bgi".

    Deploying to Client Machines

    Deployment to the respective client machines is pretty straightforward. No installation is required; you just need to copy BGInfo.exe and BGInfoCapture.bgi to each machine and place them in the same directory. Once they are in place, just run the command:

        BGInfo.exe BGInfoCapture.bgi /Timer:0 /Silent /NoLicPrompt

    Of course, you probably want the capture process to run on a schedule. This command creates a Scheduled Task to run the capture process at 8 AM every morning and assumes you copied the required files to the root of your C drive:

        SCHTASKS /Create /SC DAILY /ST 08:00 /TN "System Info" /TR "C:\BGInfo.exe C:\BGInfoCapture.bgi /Timer:0 /Silent /NoLicPrompt"

    Adjust as needed, but the end result is a scheduled task command that looks something like the above (the original article shows a screenshot of the result here).

    Download BGInfo from Sysinternals
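    As a final tip, once entries are flowing in, a quick way to verify what has been captured is to query the table directly. A minimal sketch, assuming the default "BGInfoTable" table mentioned above (the exact columns depend on the fields you selected in BGInfo):

        SQLCMD -S "%Server%" -E -d BGInfo -Q "SELECT * FROM BGInfoTable"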

    Read the article

  • The Shift: how Orchard painlessly shifted to document storage, and how it’ll affect you

    - by Bertrand Le Roy
    We’ve known it all along. The storage for Orchard content items would be much more efficient using a document database than a relational one. Orchard content items are composed of parts that serialize naturally into infoset kinds of documents. Storing them as relational data like we’ve done so far was unnatural and required the data for a single item to span multiple tables, related through 1-1 relationships. This means lots of joins in queries, and a great potential for Select N+1 problems.

    Document databases, unfortunately, are still a tough sell in many places that prefer the more familiar relational model. Being able to x-copy Orchard to hosters has also been a basic constraint in the design of Orchard. Combine those with the necessity at the time to run in medium trust, and with license compatibility issues, and you’ll find yourself with very few reasonable choices. So we went, a little reluctantly, for relational SQL stores, with the dream of one day transitioning to document storage. We played for a while with the idea of building our own document storage on top of SQL databases, and Sébastien implemented something more than decent along those lines, but we had a better way all along that we didn’t notice until recently…

    In Orchard, there are fields, which are named properties that you can add dynamically to a content part. Because they are so dynamic, we have been storing them as XML in a column on the main content item table. This infoset storage and its associated API are fairly generic, but were only used for fields. The breakthrough was when Sébastien realized how this existing storage could give us the advantages of document storage with minimal changes, while continuing to use relational databases as the substrate.

        public bool CommercialPrices {
            get { return this.Retrieve(p => p.CommercialPrices); }
            set { this.Store(p => p.CommercialPrices, value); }
        }

    This code is very compact and efficient because the API can infer from the expression what the type and name of the property are. It is then able to do the proper conversions for you. For this code to work in a content part, there is no need for a record at all. This is particularly nice for site settings: one query on one table and you get everything you need.

    This shows how the existing infoset solves the data storage problem, but you still need to query. Well, for those properties that need to be filtered and sorted on, you can still use the current record-based relational system, which of course continues to work. We do however provide APIs that make it trivial to store into both record properties and the infoset storage in one operation:

        public double Price {
            get { return Retrieve(r => r.Price); }
            set { Store(r => r.Price, value); }
        }

    This code looks strikingly similar to the non-record case above. The difference is that it will manage both the infoset and the record-based storages. The call to the Store method will send the data to both places, keeping them in sync. The call to the Retrieve method does something even cooler: if the property you’re looking for exists in the infoset, it will return it, but if it doesn’t, it will automatically look into the record for it. And if that wasn’t cool enough, it will take that value from the record and store it into the infoset for the next time it’s required.
    This means that your data will start automagically migrating to infoset storage just by virtue of using the code above instead of the usual:

        public double Price {
            get { return Record.Price; }
            set { Record.Price = value; }
        }

    As your users browse the site, it will get faster and faster as Select N+1 issues optimize themselves away. If you preferred, you could still have explicit migration code, but it really shouldn’t be necessary most of the time. If you already have code using QueryHints to mitigate Select N+1 issues, you might want to reconsider those, as with the new system you’ll want to avoid joins that you don’t need for filtering or sorting, further optimizing your queries.

    There are some rare cases where the storage of the property must be handled differently. Check out this string[] property on SearchSettingsPart for example:

        public string[] SearchedFields {
            get {
                return (Retrieve<string>("SearchedFields") ?? "")
                    .Split(new[] {',', ' '}, StringSplitOptions.RemoveEmptyEntries);
            }
            set { Store("SearchedFields", String.Join(", ", value)); }
        }

    The array of strings is transformed by the property accessors into and from a comma-separated list stored in a string. The Retrieve and Store overloads used in this case are lower-level versions that explicitly specify the type and name of the attribute to retrieve or store.

    You may be wondering what this means for code or operations that look directly at the database tables instead of going through the new infoset APIs. Even if there is a record, the infoset version of the property will win if it exists, so it is necessary to keep the infoset up to date. It’s not very complicated, but definitely something to keep in mind. (The original post shows a screenshot of a product record in Nwazet.Commerce here, followed by the same data in the infoset.)

    The infoset is stored in Orchard_Framework_ContentItemRecord or Orchard_Framework_ContentItemVersionRecord, depending on whether the content type is versionable or not. A good way to find what you’re looking for is to inspect the record table first, as it’s usually easier to read, and then get the item record with the same id. Here is the detailed XML document for this product:

        <Data>
          <ProductPart Inventory="40" Price="18" Sku="pi-camera-box" OutOfStockMessage=""
                       AllowBackOrder="false" Weight="0.2" Size="" ShippingCost="null" IsDigital="false" />
          <ProductAttributesPart Attributes="" />
          <AutoroutePart DisplayAlias="camera-box" />
          <TitlePart Title="Nwazet Pi Camera Box" />
          <BodyPart Text="[...]" />
          <CommonPart CreatedUtc="2013-09-10T00:39:00Z" PublishedUtc="2013-09-14T01:07:47Z" />
        </Data>

    The data is neatly organized under each part. It is easy to see how that document is all you need to know about that content item, all in one table. If you want to modify that data directly in the database, you should be careful to do it in both the record table and the infoset in the content item record.

    In this configuration, the record is now nothing more than an index, and will only be used for sorting and filtering. Of course, it’s perfectly fine to mix record-backed properties and record-less properties on the same part; it really depends on what you think must be sorted and filtered on. In turn, this potentially simplifies migrations considerably.

    So here it is, the great shift of Orchard to document storage, something that Orchard has been designed for all along, and that we were able to implement with a satisfying and surprising economy of resources.
Expect this code to make its way into the 1.8 version of Orchard when that’s available.
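    As an illustration of the mixing point above, here is a hedged sketch of a part combining both kinds of properties. The part, record and property names are hypothetical, not from the post:

        public class StorePart : ContentPart<StorePartRecord> {
            // Infoset-only: lives in the XML document, no record column needed
            public string Tagline {
                get { return this.Retrieve(x => x.Tagline); }
                set { this.Store(x => x.Tagline, value); }
            }

            // Record-backed: kept in sync in both stores so it can be filtered and sorted on
            // (assumes StorePartRecord declares a Rating column)
            public double Rating {
                get { return Retrieve(r => r.Rating); }
                set { Store(r => r.Rating, value); }
            }
        }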

    Read the article

  • Is Financial Inclusion an Obligation or an Opportunity for Banks?

    - by tushar.chitra
    Why should banks care about financial inclusion? First, the statistics; I think this will set the tone for this blog post. There are close to 2.5 billion people who are excluded from the banking stream, and of these, 2.2 billion are from Africa, Latin America and Asia (McKinsey on Society: Global Financial Inclusion). However, this is not just a third-world phenomenon. According to the Federal Deposit Insurance Corp (FDIC), in the US, following the 2008 financial crisis, one family in five has either opted out of the banking system or been moved out (American Banker).

    Moving this huge unbanked population into mainstream banking is both an opportunity and a challenge for banks. An obvious opportunity is the significant untapped customer base that banks can target, as is the positive brand equity a bank can build by fulfilling its social responsibilities. Also, as banks target the cost-conscious unbanked customer, they will be forced to look at ways to offer cost-effective products and services, necessitating technology upgrades and innovations. However, cost is not the only hurdle to increasing the adoption of banking services. The potential users need to be convinced of the benefits of banking, and banks will also face stiff competition from unorganized players. Finally, banks will have to believe in the viability of this business opportunity, and not treat financial inclusion as an obligation.

    In what ways can banks target the unbanked?

    For financial inclusion to be a success, banks should adopt innovative business models to develop products that address the stated and unstated needs of the unbanked population, and also design delivery channels that are cost effective and viable in the long run.

    Through business correspondents and facilitators

    In rural and remote areas, one of the major hurdles to increasing banking penetration is connectivity and accessibility to banking services, which makes last-mile inclusion a daunting challenge. To address this, banks can avail themselves of the services of business correspondents or facilitators. This model allows banks to establish greater connectivity through a trusted and reliable intermediary. In India, for instance, banks can leverage the local Kirana stores (the mom & pop stores) to service rural and remote areas. With a supportive nudge from the central bank, the commercial banks can enlist these shop owners as business correspondents to increase their reach. Since these neighborhood stores are acquainted with the local population, they can help banks manage the KYC norms, besides serving as a conduit for remittance. Banks also have an opportunity over a period of time to cross-sell other financial products such as micro insurance, mutual funds and pension products through these correspondents. To exercise greater operational control over the business correspondents, banks can also adopt a combination of branch and business correspondent models to deliver financial inclusion.

    Through mobile devices

    According to a 2012 World Bank report on financial inclusion, out of a world population of 7 billion, over 5 billion (70%) have mobile phones, while only 2 billion (30%) have a bank account. What this means for banks is that there is scope to leverage this phenomenal growth in mobile usage to serve the unbanked population. Banks can use mobile technology to service the basic banking requirements of their customers with no-frills accounts, effectively bringing down the cost per transaction.
    As I discussed in my earlier post on mobile payments, though non-traditional players have taken the lead in P2P mobile payments, banks still hold an edge in terms of infrastructure and reliability.

    Through crowdfunding

    According to the Crowdfunding Industry Report by Massolution, the global crowdfunding industry raised $2.7 billion in 2012, and is projected to grow to $5.1 billion in 2013. With credit policies becoming tighter and banks becoming more circumspect about loan disbursals, crowdfunding has emerged as an alternative channel for lending. Typically, these initiatives target the unbanked population by offering small loans that are unviable for larger banks. Though a significant proportion of crowdfunding initiatives globally are run by non-banking institutions, banks are also venturing into this space.

    The next step towards inclusive finance

    Banks by themselves cannot make financial inclusion a success. There is a need for a whole ecosystem that is supportive of this mission. The policy makers, including the regulators and government bodies, must be in sync; the IT solution providers must put on their thinking caps to come up with innovative products and solutions; communication channels such as the internet and mobile need to expand their reach; and the media and the public need to play an active part.

    The other challenge for financial inclusion comes from the banks themselves. While it is true that financial inclusion will unleash a hitherto hugely untapped market, the normal banking model may be found wanting because of issues such as flexibility, convenience and reliability. The business will be viable only when there is a focus on increasing the usage of existing infrastructure, and that is possible when banks can offer the entire range of products and services to the large number of users of essential banking services.

    Apart from these challenges, banks will also have to quickly master and replicate the business model to extend their reach to the remotest regions in their respective geographies. They will need to ensure that the transactions deliver a viable business benefit to the bank. For tapping cross-sell opportunities, banks will have to quickly roll out customized and segment-specific products. The bank staff should be brought in sync with the business plan by convincing them of the viability of the business model and the need for a business correspondent delivery model. Banks, in collaboration with the government and NGOs, will have to run extensive financial literacy programs to educate the unbanked about the benefits of banking.

    Finally, with the growing importance of retail banking and with many unconventional players eyeing the opportunity in payments and other lucrative areas of banking, banks need to understand the importance of micro and small branches. These micro and small branches can help banks increase their presence without a huge cost burden, give bankers an opportunity to cross-sell micro products, and offer a window of opportunity for the large non-banked population to transact without any interference from intermediaries. These branches can also help diminish the role of the unorganized financial sector, such as local moneylenders and unregistered credit societies. This will also help banks build brand awareness and loyalty among users, which by itself has a cascading effect on business operations, especially in rural and unbanked centers.
In conclusion, with the increasingly competitive banking sector facing frequent slowdowns and downturns, the unbanked population presents a huge opportunity for banks to enhance their customer base and fulfill their social responsibility.

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic set up of Azure storage for local development and production. This is a practical companion to the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/, covering a scenario I believe is commonly used: uploading and presenting an image from a user.

    First we set up local storage, and then we configure it to work in a web role. Steps:

    1. Configure the connection string locally.
    2. Configure the model, controllers and razor views.

    1. Set up the connection string

    1.1 Right click your web role and choose "Properties".
    1.2 Click Settings.
    1.3 Add setting.
    1.4 Name your setting. This will be the name of the connection string.
    1.5 Click the ellipsis to the right (the ellipsis appears when you select the value area).
    1.6 In the window that appears, select "Windows Azure storage emulator" and click OK.

    Now we have a connection string to use. To be able to use it we need to make sure we have the Windows Azure tools for storage.

    2.1 Click Tools -> Library Package Manager -> Manage NuGet Packages for Solution and add the package.
    2.2 (The original post shows a screenshot of what it looks like after it has been added.)

    Now on to what the code should look like.

    3.1 First we need a view that collects images to upload. Here, Index.cshtml:

        @model List<string>

        @{
            ViewBag.Title = "Index";
        }

        <h2>Index</h2>
        <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data">

            <label for="file">Filename:</label>
            <input type="file" name="file" id="file1" />
            <br />
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file2" />
            <br />
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file3" />
            <br />
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file4" />
            <br />
            <input type="submit" value="Submit" />

        </form>

        @foreach (var item in Model) {
            <img src="@item" alt="Alternate text"/>
        }

    3.2 We need a controller to receive the post. Notice the "containername" string I send to the BlobHandler. I use this as a folder for the pictures of each user. If this is not a requirement, you could just call it "container" or anything in lowercase characters directly when creating the container.

        public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
        {
            BlobHandler bh = new BlobHandler("containername");
            bh.Upload(file);
            var blobUris = bh.GetBlobs();

            return RedirectToAction("Index", blobUris);
        }

    3.3 The handler model. I'll let the comments speak for themselves.

        public class BlobHandler
        {
            // Retrieve storage account from connection string.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
                CloudConfigurationManager.GetSetting("StorageConnectionString"));

            private string imageDirecoryUrl;

            /// <summary>
            /// Receives the user's id for where the pictures are and creates
            /// a blob container with that name if it does not exist.
            /// </summary>
            /// <param name="imageDirecoryUrl"></param>
            public BlobHandler(string imageDirecoryUrl)
            {
                this.imageDirecoryUrl = imageDirecoryUrl;

                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve a reference to a container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                // Create the container if it doesn't already exist.
                container.CreateIfNotExists();

                // Make it available to everyone.
                container.SetPermissions(
                    new BlobContainerPermissions
                    {
                        PublicAccess = BlobContainerPublicAccessType.Blob
                    });
            }

            public void Upload(IEnumerable<HttpPostedFileBase> file)
            {
                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve a reference to a container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                if (file != null)
                {
                    foreach (var f in file)
                    {
                        if (f != null)
                        {
                            CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName);
                            blockBlob.UploadFromStream(f.InputStream);
                        }
                    }
                }
            }

            public List<string> GetBlobs()
            {
                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve reference to a previously created container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                List<string> blobs = new List<string>();

                // Loop over blobs within the container and output the URI of each.
                foreach (var blobItem in container.ListBlobs())
                    blobs.Add(blobItem.Uri.ToString());

                return blobs;
            }
        }

    3.4 When the files have been uploaded, we present them to our user on the index page. Pretty straightforward. In this example we only present the images by sending the URIs to the view. A better way would be to wrap them in a view model containing URI, metadata, alternate text and other relevant information, but for this example this is all we need.

    4. Now press F5 in your solution to try it out. (The original post shows the storage emulator UI here.)

    4.1 If you get any exceptions or errors, I suggest first checking that the service is running correctly. I had problems with this that seemed related to the installation, and a reboot fixed them.

    5. Set up for cloud storage. To do this we need to add configuration for the cloud just as we did for local storage in step one.

    5.1 We need our keys to do this. Go to the Windows Azure management portal, select the storage icon to the right and click "Manage keys". (The image in the original post is from a different blog post, though.)

    5.2 Do as in step 1, but replace step 1.6 with:
    1.6 Choose "Manually entered credentials". Enter your account name.
    1.7 Paste your account key from step 5.1 and click OK.

    5.3 Save, publish and run!

    Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.
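    P.S. One piece the walkthrough leaves implicit is the Index action that feeds the view in step 3.1. A minimal sketch, assuming the same "containername" used in Upload:

        public ActionResult Index()
        {
            // List whatever has been uploaded so far so the view can render the images.
            var bh = new BlobHandler("containername");
            return View(bh.GetBlobs());
        }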

    Read the article

  • Configuration "diff" across Oracle WebCenter Sites instances

    - by Mark Fincham-Oracle
    Problem Statement

    With many Oracle WebCenter Sites environments, how do you know if the various configuration assets and settings are in sync across all of those environments?

    Background

    At Oracle we typically have a "W" shaped set of environments. For the "Production" environments we typically have a disaster recovery clone as well, and sometimes additional QA environments alongside the production management environment. In the case of www.java.com we have 10 different environments. All configuration assets/settings (CSElements, Templates, Start Menus etc.) start life on the Development Management environment and are then published downstream to other environments as part of the software development lifecycle. Ensuring that each of these 10 environments has the same set of Templates, CSElements, StartMenus, TreeTabs etc. is impossible to do efficiently without automation.

    Solution Summary

    The solution comprises two components:

    1. A JSON data feed from each environment.
    2. A simple HTML page that consumes these JSON data feeds.

    Data Feed: Create a JSON WebService on each environment. The WebService is no more than a SiteEntry + CSElement. The CSElement queries various DB tables to obtain details of the assets/settings, returning this data in a JSON feed.

    Report: Create a simple HTML page that uses jQuery to fetch the JSON feed from each environment and display the results in a table. Since all assets (CSElements, Templates etc.) are published between environments they will have the same last modified date. If the last modified date of an asset is different in the JSON feed, or the asset is missing from an environment entirely, then highlight that in the report table.

    Solution Details

    Step 1: Create a SiteEntry + CSElement that outputs JSON

    The SiteEntry should be uncached so that the most recent configuration information is returned at all times. In the CSElement, set the contenttype accordingly.

    Step 2: Write the CSElement Logic

    The basic logic, which we repeat for each asset or setting we are interested in, is to query the DB using <ics:sql> and then loop over the result set with <ics:listloop>. For example:

        <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" />
        "templates": [
        <ics:listloop listname="TemplateList">
          {"name":"<ics:listget listname="TemplateList" fieldname="name"/>",
           "modified":"<ics:listget listname="TemplateList" fieldname="updateddate"/>"},
        </ics:listloop>
        ],

    A comprehensive list of SQL queries to fetch each configuration asset/setting can be seen in the appendix at the end of this article. For the generation of the JSON data structure you could use Jettison (the library ships with the 11.1.1.8 version of the product) or native Java 7 capabilities, or (as the above example demonstrates) you could roll your own JSON output, but that is not advised.

    Step 3: Create an HTML Report

    The JavaScript logic looks something like this.
    1) Create a list of JSON feeds to fetch:

        ENVS['dev-mgmngt'] = 'http://dev-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';
        ENVS['dev-dlvry'] = 'http://dev-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';
        ENVS['test-mngmnt'] = 'http://test-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';
        ENVS['test-dlvry'] = 'http://test-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';

    2) Create a function to get the JSON feeds (the environment name is passed in alongside the URL so it can be attached to the request):

        function getDataForEnvironment(env, url) {
            return $.ajax({
                type: 'GET',
                url: url,
                dataType: 'jsonp',
                beforeSend: function (jqXHR, settings) {
                    jqXHR.originalEnv = env;
                    jqXHR.originalUrl = url;
                },
                success: function (json, status, jqXHR) {
                    console.log('....success fetching: ' + jqXHR.originalUrl);
                    // store the returned data in ALLDATA
                    ALLDATA[jqXHR.originalEnv] = json;
                },
                error: function (jqXHR, status, e) {
                    console.log('....ERROR: Failed to get data from [' + url + '] ' + status + ' ' + e);
                }
            });
        }

    3) Fetch each JSON feed:

        for (var env in ENVS) {
            console.log('Fetching data for env [' + env + '].');
            var promisedData = getDataForEnvironment(env, ENVS[env]);
            promisedData.success(function (data) {});
        }

    4) For each configuration asset or setting, create a table in the report. For example, for CSElements (a sketch of this comparison logic follows the list):

    1) Get a list of unique CSElement names from all of the returned JSON data.
    2) For each unique CSElement name, create a row in the table.
    3) Select one environment to represent the master or ideal state (e.g. "everything should be like Production Delivery").
    4) For each environment, compare the last modified date of this environment's CSElement to the master. Highlight any differences in last modified date or missing CSElements.
    5) Repeat...
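    As referenced above, here is a hedged sketch of that comparison logic for one asset type. The "cselements" feed key, the master environment name and the helper names are hypothetical, not from the original solution:

        // Look an asset up by name within one environment's feed.
        function findByName(list, name) {
            return (list || []).filter(function (el) { return el.name === name; })[0];
        }

        // Build one comparison row per CSElement name, comparing each env against a master env.
        function compareCSElements(masterEnv) {
            var names = {};
            for (var env in ALLDATA) {
                (ALLDATA[env].cselements || []).forEach(function (el) { names[el.name] = true; });
            }
            Object.keys(names).sort().forEach(function (name) {
                var master = findByName(ALLDATA[masterEnv].cselements, name);
                var row = { name: name };
                for (var env in ALLDATA) {
                    var el = findByName(ALLDATA[env].cselements, name);
                    // Flag the cell when the asset is missing or its date differs from the master.
                    row[env] = !el ? 'MISSING'
                             : (master && el.modified !== master.modified) ? 'DIFFERS: ' + el.modified
                             : 'OK';
                }
                console.log(JSON.stringify(row)); // in the real report this becomes a table row
            });
        }

        // Usage, once all feeds have loaded:
        // compareCSElements('prod-dlvry');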
    Appendix

    This section contains various SQL statements that can be used to retrieve configuration settings from the DB.

    Templates

        <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" />

    CSElements

        <ics:sql sql="SELECT name,updateddate FROM CSElement WHERE status != 'VO'" listname="CSEList" table="CSElement" />

    Start Menus

        <ics:sql sql="select sm.id, sm.cs_name, sm.cs_description, sm.cs_assettype, sm.cs_assetsubtype, sm.cs_itemtype, smr.cs_rolename, p.name from StartMenu sm, StartMenu_Sites sms, StartMenu_Roles smr, Publication p where sm.id=sms.ownerid and sm.id=smr.cs_ownerid and sms.pubid=p.id order by sm.id" listname="startList" table="Publication,StartMenu,StartMenu_Roles,StartMenu_Sites"/>

    Publishing Configurations

        <ics:sql sql="select id, name, description, type, dest, factors from PubTarget" listname="pubTargetList" table="PubTarget" />

    Tree Tabs

        <ics:sql sql="select tt.id, tt.title, tt.tooltip, p.name as pubname, ttr.cs_rolename, ttsect.name as sectname from TreeTabs tt, TreeTabs_Roles ttr, TreeTabs_Sect ttsect,TreeTabs_Sites ttsites LEFT JOIN Publication p on p.id=ttsites.pubid where p.id is not null and tt.id=ttsites.ownerid and ttsites.pubid=p.id and tt.id=ttr.cs_ownerid and tt.id=ttsect.ownerid order by tt.id" listname="treeTabList" table="TreeTabs,TreeTabs_Roles,TreeTabs_Sect,TreeTabs_Sites,Publication" />

    Filters

        <ics:sql sql="select name,description,classname from Filters" listname="filtersList" table="Filters" />

    Attribute Types

        <ics:sql sql="select id,valuetype,name,updateddate from AttrTypes where status != 'VO'" listname="AttrList" table="AttrTypes" />

    WebReference Patterns

        <ics:sql sql="select id,webroot,pattern,assettype,name,params,publication from WebReferencesPatterns" listname="WebRefList" table="WebReferencesPatterns" />

    Device Groups

        <ics:sql sql="select id,devicegroupsuffix,updateddate,name from DeviceGroup" listname="DeviceList" table="DeviceGroup" />

    Site Entries

        <ics:sql sql="select se.id,se.name,se.pagename,se.cselement_id,se.updateddate,cse.rootelement from SiteEntry se LEFT JOIN CSElement cse on cse.id = se.cselement_id where se.status != 'VO'" listname="SiteList" table="SiteEntry,CSElement" />

    Webroots

        <ics:sql sql="select id,name,rooturl,updatedby,updateddate from WebRoot" listname="webrootList" table="WebRoot" />

    Page Definitions

        <ics:sql sql="select pd.id, pd.name, pd.updatedby, pd.updateddate, pd.description, pdt.attributeid, pa.name as nameattr, pdt.requiredflag, pdt.ordinal from PageDefinition pd, PageDefinition_TAttr pdt, PageAttribute pa where pd.status != 'VO' and pa.id=pdt.attributeid and pdt.ownerid=pd.id order by pd.id,pdt.ordinal" listname="pageDefList" table="PageDefinition,PageAttribute,PageDefinition_TAttr" />

    FW_Application

        <ics:sql sql="select id,name,updateddate from FW_Application where status != 'VO'" listname="FWList" table="FW_Application" />

    Custom Elements

        <ics:sql sql="select elementname from ElementCatalog where elementname like 'CustomElements%'" listname="elementList" table="ElementCatalog" />

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

    Team Foundation Service is a great platform for .NET developers who are used to working with TFS on-premises. I’ve been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than with the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn’t know I needed. Not to say that I wasn’t previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted, with a one-click setup, well, that’s just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of.

    This will be the first in a three part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a "Hello World" example.

    Part 1: Good Enough Is Not Great
    Part 2: A Model That Works: Branching and Multiple Deployment Environments
    Part 3: Other Considerations: SQL, Custom Tasks, Etc

    Good Enough Is Not Great

    There. I’ve said it. I certainly hope no one on the TFS team is offended, but it’s the truth. Let’s take a look under the hood and understand how it works, and then why it’s not enough to handle real world CD as-is.

    How it works

    (Note that I’ve skipped a couple of steps; I already have my accounts set up and something deployed to Azure.) The first step is to establish some OAuth magic between your Azure management portal and your TFS instance. You do this via the management portal. Once it’s done, you have a new build process template in your TFS instance. (Image lifted from the documentation.)

    From here, you’ll get the usual prompts for security, allowing access, etc. But you’ll also get to pick which solution in your source control to build. Here’s what the bulk of the build definition looks like (screenshot in the original post). All I’ve had to do is add in the solution to build (notice that mine is from a specific branch, Release, more on that later) and change the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it’s all in the cloud and I’m not waiting for my machine to compile and upload the package. (I also had to enable the build definition first; by default it is created in a disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You’ll notice also that this build definition automatically put my code in the Staging slot of my Azure deployment; more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and "production" and "staging" in Azure. More on that later too.)

    That’s it. That’s the default out-of-box experience. Easy, right? But it’s full of room for improvement, so let’s get into that.

    The Problems

    Nothing is perfect (except my code; it’s always perfect), and neither is continuous deployment without a bit of work to help it fit your dev team’s process. So what are the issues?

    Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have.

    This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap). If I truly want to promote between environments (i.e. Nightly Build -> Stable QA -> Production) I likely have configuration changes between each environment, such as database connection strings, and this process (and the VIP swap) doesn’t account for this. Yet.

    Issue 2: Branching and delivering on every check-in.

    As I mentioned above, I have set this up to target a specific branch, Release, of my code. For the purposes of this example, I have adopted the "basic" branching strategy as defined by the ALM Rangers. This basically establishes a "Main" trunk from which you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don’t want to roll to production automatically when you merge to the Release branch and check in (unless you like the thrill of it, and in that case, I like your style, cowboy…). Rather, you have nightly build and QA environments, or if you’ve adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution-to-Azure-deployment target.

    Issue 3: SQL and other custom tasks.

    Let’s be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can’t think of an application I have touched in the last 10 years that doesn’t need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they’ve added Data Tier Applications. But let’s be honest with ourselves again: no one really uses it, and it’s not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn’t fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it’s completed. Developers, not just DBAs, also like to work with SQL in SQL tools, not in Visual Studio. I’m really picking on SQL because that’s generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.

    The Solutions… ?

    We’ve taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I’ve overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app: any version, any time, to any environment.

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine, and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover:

    SQLU part 1 - What and why of database testing
    SQLU part 2 - What and why of database refactoring
    SQLU part 3 - Tools of the trade

    With that out of the way, let us sharpen our pencils and get going.

    Why test a database

    The sad state of the industry today is that there is very little emphasis on testing in general. Test driven development is still a small niche of the programming world, while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment they are mostly viewed as a waste of time. This is because the average person (let’s not fool ourselves, we’re all average) is unable to weigh lower future costs against a little more current work. It’s orders of magnitude easier to know the current costs in relation to the current amount of work. That’s why programmers convince themselves testing is a waste of time.

    However, we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really. If we introduce bugs, we’re likely to write tests around those bugs too. But yes, we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we’ll see what breaks in other tests. And here comes the inability to estimate future costs: by spending just a few more hours writing those tests we’d know instantly what broke where.

    Imagine we fix a reported bug. We check in the code, deploy it and the users are happy. Until we get a call 2 weeks later about a certain monthly process that has stopped working. What we don’t know is that this process was developed by a long gone coworker and for some reason it relied on that same bug we’ve happily fixed. There’s no way we could’ve known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there’s this ETL job that relied on data from that monthly process. Now that we’ve fixed the process it’s giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there’s this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the "Loop of maintenance horror". With the loop eventually comes blame. Here’s a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit to it.

    All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests?
    What to test in a database

    Now that we’ve decided we’ll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on!

    There are 2 main ways of doing tests: black box and white box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that internal changes to the system haven’t caused its input/output behavior to change. The most important things to test here are the edge conditions; it’s where most programs break. Having good edge condition tests, we can be more confident that system changes won’t break. White box testing has full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc. White and black box tests should be complementary to each other, as they are very much interconnected.

    Testing database routines includes testing stored procedures, views, user defined functions and anything else you use to access the data. Database routines are your input/output interface to the database system, so they count as black box testing. We test them for 2 things: data and schema. When testing schema we only care about the columns and the data types they’re returning. After all, the schema is the contract to the outside systems; if it changes, we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells SQL Server to return only empty result sets, which speeds up tests because no data is returned to the client (a minimal example appears at the end of this post). After we’ve validated the schema, we have to test the returned data. There is no other way to do this but to have the expected data known before the test executes and to compare that data to the database routine output.

    Testing authentication and authorization helps us validate who has access to the SQL Server box (authentication) and who has access to certain database objects (authorization). For desktop applications and Windows authentication this works well. But the biggest problem here are web apps, which usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad.

    Load testing ensures that our database can handle peak loads. One often overlooked tool for load testing is Microsoft’s OSTRESS tool. It’s part of the RML utilities (x86, x64) for SQL Server and can help determine if our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by showing why certain queries are slow and what to do to fix them.

    One particular problem to think about is how to begin testing existing databases. The first thing we have to do is get to know those databases; we can’t test something when we don’t know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc. The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we’ve done that we move on to the next grouping, and so on, until we’ve covered the whole database. Then we move on to the next one.
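    As promised above, here is a minimal T-SQL sketch of the schema-testing tip (the routine name and parameter are hypothetical):

        -- Return only metadata: an empty result set whose columns and types we can assert on.
        SET FMTONLY ON;
        EXEC dbo.GetCustomerOrders @CustomerId = 1; -- hypothetical routine under test
        SET FMTONLY OFF;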
    Database testing is a topic we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.

    Read the article

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model, and in the process introduced tiled_index and tiled_grid. This is part 2.

    tile_static memory

    You all know that in regular CPU code, static variables have the same value regardless of which thread accesses them. This is in contrast with non-static local variables, where each thread has its own copy. Back in C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, while threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array:

        // each tile of threads has its own copy of locA,
        // shared among the threads of the tile
        tile_static float locA[16][16];

    Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors.

    tile_barrier

    In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one? You get it from a tiled_index object: beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait.

        15: // Given a tiled_index object named t_idx
        16: t_idx.barrier.wait();
        17: // more code

    In the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code such that there is a chance not all threads could reach line 16, then the code above would be illegal.

    Tiled Matrix Multiplication Example - part 2

    So now that we have added the concepts of tile_static and tile_barrier to our understanding, let me rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code.
    01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
    02: {
    03:   static const int TS = 16;
    04:   array_view<const float,2> a(M, W, vA);
    05:   array_view<const float,2> b(W, N, vB);
    06:   array_view<writeonly<float>,2> c(M,N,vC);
    07:   parallel_for_each(c.grid.tile< TS, TS >(),
    08:     [=] (tiled_index< TS, TS> t_idx) restrict(direct3d)
    09:   {
    10:     int row = t_idx.local[0]; int col = t_idx.local[1];
    11:     float sum = 0.0f;
    12:     for (int i = 0; i < W; i += TS) {
    13:       tile_static float locA[TS][TS], locB[TS][TS];
    14:       locA[row][col] = a(t_idx.global[0], col + i);
    15:       locB[row][col] = b(row + i, t_idx.global[1]);
    16:       t_idx.barrier.wait();
    17:       for (int k = 0; k < TS; k++)
    18:         sum += locA[row][k] * locB[k][col];
    19:       t_idx.barrier.wait();
    20:     }
    21:     c[t_idx.global] = sum;
    22:   });
    23: }

    Notice that all the code up to line 9 is the same as per the changes we made in part 1 of the tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference is that those lines use the new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b) to the tile_static vanilla arrays (locA and locB); we copy enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post, another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool.

    Why would I introduce this tiling complexity into my code?

    Funny you should ask that; I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, ensured the number of threads we schedule is correctly divisible by the tile size, had to use a tiled_index instead of a normal index, had to understand tile_barrier and figure out where we need to use it, and doubled the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory.

    Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU. E.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts.

    Now, some algorithms will not see performance benefits from tiling (and may in fact deteriorate). E.g. algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you already have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest in making it 250 times faster. Also, algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Note too that in future releases we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today.
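    For completeness, here is a hedged sketch of driving the tiled function above. The sizes are hypothetical; as noted, the dimensions must divide evenly by the 16x16 tile size:

        #include <vector>
        using std::vector;

        int main()
        {
            const int M = 256, N = 256, W = 256; // multiples of the tile size (16)
            vector<float> A(M * W, 1.0f);        // sample input data
            vector<float> B(W * N, 2.0f);
            vector<float> C(M * N);
            MatrixMultiplyTiled(C, A, B, M, N, W);
            // Each element of C should now be W * 1.0f * 2.0f
            return 0;
        }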

    Read the article

  • 5 Best Practices - Laying the Foundation for WebCenter Projects

    - by Kellsey Ruppel
    Today’s guest post comes from Oracle WebCenter expert John Brunswick. John specializes in enterprise portal and content management solutions, actively contributes to the enterprise software business community, and has authored a series of articles about optimal business involvement in portal, business process management and SOA development, examining ways of helping organizations move away from monolithic application development. We’re happy to have John join us today!

    Maximizing success with Oracle WebCenter portal requires a strategic understanding of Oracle WebCenter capabilities. The following best practices enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration. They are inherently project agnostic, providing a strong foundation for future growth and an expedient return on your investment in the platform. If you are able to embrace even only a few of these practices, you will materially improve your deployment capability with WebCenter.

    1. Segment Duties Around the 3 Cs - Content, Collaboration and Contextual Data

    "Agility" is one of the most common business benefits touted by modern web platforms. It sounds good; who doesn't want to be agile, right? How exactly IT organizations go about supplying agility to their business counterparts often lacks definition, hamstrung by ambiguity. Ultimately, businesses want to benefit from reduced development time to deliver a solution to a particular constituent, augmented by as much self-service as possible to develop and manage the solution directly, all done in the absence of direct IT involvement.

    With Oracle WebCenter's depth in the areas of content management, palette of native collaborative services, enterprise mashup capability and delegated administration, it is very possible to execute on this business vision at a technical level. To realize the benefits of the platform depth we can think of Oracle WebCenter's segmentation of duties along the lines of the 3 Cs - Content, Collaboration and Contextual Data - all three of which can have their foundations developed by IT, then provisioned to the business on a per-role basis.

    Content - Oracle WebCenter benefits from an extremely mature content repository. Workflow, audit, notification, office integration and conversion capabilities for documents (HTML & PDF) make this a haven for business users to take control of content within external and internal portals, custom applications and web sites. When deploying WebCenter portal, take time to think of areas in which IT can provide the "harness" for content to reside, then allow the business to manage any content items within the site, using the content foundation to ensure compliance with business rules and process. This frees IT to work on more mission critical challenges and allows the business to respond in short order to emerging market needs.

    Collaboration - Native collaborative services and WebCenter Spaces are a perfect match for business users who are looking to enable document sharing, discussions and social networking. The ability to deploy the services is granular and on the basis of roles scoped to given areas of the system, much like the first C, content. This enables business analysts to design the roles required and IT to provision them, with peace of mind that users leveraging the collaborative services are only able to do so in explicitly designated areas of a site.
Bottom line - business will not need to wait for IT, but cannot go outside of the scope that has been defined based on their roles. Contextual Data – Collaborative capabilities are most powerful when included within the context of business data.  The ability to supply business users with decision shaping data that they can include in various parts of a portal or portals, just as they would with content items, is one of the most powerful aspects of Oracle WebCenter.  Imagine a discussion about new store selection for a retail chain that re-purposes existing information from business intelligence services about various potential locations and or custom backend systems - presenting it directly in the context of the discussion.  If there are some data sources that are preexisting in your enterprise take a look at how they can be made into discrete offerings within the portal, then scoped to given business user roles for inclusion within collaborative activities. 2. Think Generically, Execute Specifically Constructs.  Anyone who has spent much time around me knows that I am obsessed with this word.  Why? Because Constructs offer immense power - more than APIs, Web Services or other technical capability. Constructs offer organizations the ability to leverage a platform's native characteristics to offer substantial business functionality - without writing code.  This concept becomes more powerful with the additional understanding of the concepts from the platform that an organization learns over time.  Let's take a look at an example of where an Oracle WebCenter construct can substantially reduce the time to get a subscription-based site out the door and into the hands of the end consumer. Imagine a site that allows members to subscribe to specific disciplines to access information and application data around that various discipline.  A space is a collection of secured pages within Oracle WebCenter.  Spaces are not only secured, but also default content stored within it to be scoped automatically to that space. Taking this a step further, Oracle WebCenter’s Activity Stream surfaces events, discussions and other activities that are scoped to the given user on the basis of their space affiliations.  In order to have a portal that would allow users to "subscribe" to information around various disciplines - spaces could be used out of the box to achieve this capability and without using any APIs or low level technical work to achieve this. 3. Make Governance Work for You Imagine driving down the street without the painted lines on the road.  The rules of the road are so ingrained in our minds, we often do not think about the process, but seemingly mundane lane markers are critical enablers. Lane markers allow us to travel at speeds that would be impossible if not for the agreed upon direction of flow. Additionally and more importantly, it allows people to act autonomously - going where they please at any given time. The return on the investment for mobility is high enough for people to buy into globally agreed up governance processes. In Oracle WebCenter we can use similar enablers to lane markers.  Our goal should be to enable the flow of information and provide end users with the ability to arrive at business solutions as needed, not on the basis of cumbersome processes that cannot meet the business needs in a timely fashion. How do we do this? 
Just as with "Segmentation of Duties" Oracle WebCenter technologies offer the opportunity to compartmentalize various business initiatives from each other within the system due to constructs and security that are available to use within the platform. For instance, when a WebCenter space is created, any content added within that space by default will be secured to that particular space and inherits meta data that is associated with a folder created for the space. Oracle WebCenter content uses meta data to support a broad range of rich ECM functionality and can automatically impart retention, workflow and other policies automatically on the basis of what has been defaulted for that space. Depending on your business needs, this paradigm will also extend to sub sections of a space, offering some interesting possibilities to enable automated management around content. An example may be press releases within a particular area of an extranet that require a five year retention period and need to the reviewed by marketing and legal before release.  The underlying content system will transparently take care of this process on the basis of the above rules, enabling peace of mind over unstructured data - which could otherwise become overwhelming. 4. Make Your First Project Your Second Imagine if Michael Phelps was competing in a swimming championship, but told right before his race that he had to use a brand new stroke.  There is no doubt that Michael is an outstanding swimmer, but chances are that he would like to have some time to get acquainted with the new stroke. New technologies should not be treated any differently.  Before jumping into the deep end it helps to take time to get to know the new approach - even though you may have been swimming thousands of times before. To quickly get a handle on Oracle WebCenter capabilities it can be helpful to deploy a sandbox for the team to use to share project documents, discussions and announcements in an effort to help the actual deployment get under way, while increasing everyone’s knowledge of the platform and its functionality that may be helpful down the road. Oracle Technology Network has made a pre-configured virtual machine available for download that can be a great starting point for this exercise. 5. Get to Know the Community If you are reading this blog post you have most certainly faced a software decision or challenge that was solved on the basis of a small piece of missing critical information - which took substantial research to discover.  Chances were also good that somewhere, someone had already come across this information and would have been excited to share it. There is no denying the power of passionate, connected users, sharing key tips around technology.  The Oracle WebCenter brand has a rich heritage that includes industry-leading technology and practitioners.  With the new Oracle WebCenter brand, opportunities to connect with these experts has become easier. Oracle WebCenter Blog Oracle Social Enterprise LinkedIn WebCenter Group Oracle WebCenter Twitter Oracle WebCenter Facebook Oracle User Groups Additionally, there are various Oracle WebCenter related blogs by an excellent grouping of services partners.

    Read the article

  • Slicing the EDG

    - by Antony Reynolds
    Different SOA Domain Configurations

    In this blog entry I would like to introduce three different configurations for a SOA environment. I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion. For each possible deployment architecture I have identified some of the advantages.

    Super Domain

    This is a single EDG-style domain for everything needed for SOA/OSB. It extends the standard EDG slightly but otherwise assumes a single "super" domain. This is basically the SOA EDG. I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.

    Key Points
    - Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if the rest of the domain is unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from OSB servers.
    - Coherence can be used by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS, but is more likely run as a standalone Coherence cluster.

    Benefits
    - Single administration point (one Admin Server).
    - Closely follows the EDG, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.

    Drawbacks
    - Patching is an all-or-nothing affair.
    - Startup time for SOA may be slow if a large number of composites are deployed.

    Multiple Domains

    This extends the EDG into multiple domains, allowing separate management and update of these domains. I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence, etc. SOA and BAM are kept in the same domain, as little benefit is obtained by separating them.

    Key Points
    - Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from OSB servers.
    - Coherence can be used by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS, but is more likely run as a standalone Coherence cluster.

    Benefits
    - Follows the EDG, but in separate domains and with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.
    - Patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
    - JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
    - OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
    - All domains use the same OWSM policy store (MDS-WSM).

    Drawbacks
    - Multiple domains to manage and configure.
    - Multiple Admin Servers (a single view requires use of Grid Control).
    - Multiple Admin Servers/WSM clusters waste resources.
    - Additional homes are needed to enjoy the benefits of separate patching.
    - Cross-domain trust needs setting up to simplify cross-domain interactions.
    - Startup time for SOA may be slow if a large number of composites are deployed.

    Shared Service Environment

    This model extends the previous multiple-domain arrangement to provide a true shared service environment, allowing multiple additional SOA domains and/or other domains to take advantage of the shared services. Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independently of other application groups.

    Key Points
    - Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from OSB servers.
    - Coherence can be used by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS, but is more likely run as a standalone Coherence cluster.
    - The shared SOA domain hosts Human Workflow tasks, BAM, and common "utility" composites.
    - A single OSB domain provides the "Enterprise Service Bus".
    - All domains use the same OWSM policy store (MDS-WSM).

    Benefits
    - Follows the EDG, but in separate domains and with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.
    - Patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
    - JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
    - OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
    - All domains use the same OWSM policy store (MDS-WSM).
    - Supports large numbers of deployed composites in multiple domains.
    - Single URL for Human Workflow end users.
    - Single URL for BAM end users.

    Drawbacks
    - Multiple domains to manage and configure.
    - Multiple Admin Servers (a single view requires use of Grid Control).
    - Multiple Admin Servers/WSM clusters waste resources.
    - Additional homes are needed to enjoy the benefits of separate patching.
    - Cross-domain trust needs setting up to simplify cross-domain interactions.
    - Human Workflow needs to be specially configured to point to the shared services domain.

    Summary

    The alternatives in this blog allow for patching to have different impacts, depending on the model chosen. Each organization must decide the tradeoffs for itself. One extreme is to go for the shared services model and have one domain per SOA application; this requires a lot of administration of the multiple domains. The other extreme is to have a single super domain; this makes the entire enterprise susceptible to an outage at the same time due to patching or other domain-level changes. Hopefully this blog will help your organization choose the right model for you.
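    By way of illustration only: readers who want to experiment with the multiple-domain layouts above can stamp out a separate JMS-only domain with a few lines of offline WLST. This is a minimal sketch, not configuration from this post - the template path, domain path and port are placeholders you would replace with your own values:

        # Offline WLST sketch: create a bare domain to host application JMS servers.
        # All paths and ports below are illustrative placeholders.
        readTemplate('/u01/oracle/wlserver/common/templates/wls/wls.jar')
        cd('Servers/AdminServer')
        set('ListenPort', 7101)            # keep clear of the SOA/OSB domain ports
        writeDomain('/u01/domains/jms_domain')
        closeTemplate()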

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

    CREATE TABLE t(a INT);
    INSERT INTO t VALUES (1),(2),(3);
    CREATE INDEX a ON t(a);
    DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

    CREATE TABLE temp(a INT, INDEX(a));
    INSERT INTO temp SELECT * FROM t;
    RENAME TABLE t TO temp2;
    RENAME TABLE temp TO t;
    DROP TABLE temp2;

    You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge sort before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

    The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes.

    In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. LOCK=SHARED and LOCK=EXCLUSIVE are also available. The old-style table copying can be requested by ALGORITHM=COPY; that one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    handler::check_if_supported_inplace_alter
    Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.

    handler::prepare_inplace_alter_table
    InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.

    handler::inplace_alter_table
    In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the 'main' phase that can be executed online (with concurrent writes to the table).

    handler::commit_inplace_alter_table
    This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table metadata lock for the commit phase, it will roll back the ALTER TABLE operation.

    In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may occur that an index needs to be created on a new column to speed up queries, but it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases:

    1. Set up a stub for the index, for logging changes.
    2. Scan the table for index records.
    3. Sort the index records.
    4. Bulk load the index records.
    5. Apply the logged changes.
    6. Replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages, although it would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY (except when foreign_key_checks=0), and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild.

    Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table with an ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements.

    Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require the data dictionary to keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. To facilitate online operation, a temporary log is associated with the clustered index of the table, and threads that modify the table also write their changes to this log. When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table.

    Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table, because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns, because those columns will be freed at rollback.
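    To make the LOCK and ALGORITHM clauses concrete, here is a brief syntax sketch. The table and index names are invented for illustration; the clauses themselves are the 5.6 syntax described above:

    -- Online secondary index creation: DML against t continues while the index builds.
    ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE;

    -- Force the old table-copying behaviour; this requires at least a shared lock.
    ALTER TABLE t ADD INDEX idx_a2 (a), ALGORITHM=COPY, LOCK=SHARED;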

    Read the article

  • Odd tcp deadlock under windows

    - by John Robertson
    We are moving large amounts of data on a LAN and it has to happen very rapidly and reliably. Currently we use Windows TCP as implemented in C++. Using large (synchronous) sends moves the data much faster than a bunch of smaller (synchronous) sends, but it will frequently deadlock for large gaps of time (.15 seconds), causing the overall transfer rate to plummet. This deadlock happens in very particular circumstances, which makes me believe it should be preventable altogether. More importantly, if we don't really know the cause, we don't really know it won't happen some time with smaller sends anyway. Can anyone explain this deadlock?

    Deadlock description (OK, zombie-locked - it isn't dead, but for .15 or so seconds it stops, then starts again):

    1. The receiving side sends an ACK.
    2. The sending side sends a packet containing the end of a message (push flag is set).
    3. The call to socket.recv takes about .15 seconds(!) to return.
    4. About the time the call returns, an ACK is sent by the receiving side.
    5. The next packet from the sender is finally sent (why is it waiting? the TCP window is plenty big).

    The odd thing about (3) is that typically that call doesn't take much time at all and receives exactly the same amount of data. On a 2 GHz machine that's 300 million instructions' worth of time. I am assuming the call doesn't (heaven forbid) wait for the received data to be acked before it returns, so the ack must be waiting for the call to return, or both must be delayed by something else.

    The problem NEVER happens when there is a second packet of data (part of the same message) arriving between 1 and 2. That part very clearly makes it sound like it has to do with the fact that Windows TCP will not send back a no-data ACK until either a second packet arrives or a 200 ms timer expires. However, the delay is less than 200 ms (it's more like 150 ms).

    The third unseemly character (and to my mind the real culprit) is (5). Send is definitely being called well before that .15 seconds is up, but the data NEVER hits the wire before that ack returns. That is the most bizarre part of this deadlock to me. It's not a TCP blockage, because the TCP window is plenty big since we set SO_RCVBUF to something like 500*1460 (which is still under a meg). The data is coming in very fast (basically there is a loop spinning out data via send), so the buffer should fill almost immediately. According to MSDN, the buffer being full and at least one pending send should cause the data to be sent (though in another place it mentions that there are various "heuristics" used in deciding when a send hits the wire). Anyway, why the sender doesn't actually send more data during that .15 second pause is the most bizarre part to me.

    The information above was captured on the receiving side via Wireshark (except of course the socket.recv return times, which were logged in a text file). We tried changing the send buffer to zero and turning off Nagle on the sender (yes, I know Nagle is about not sending small packets - but we tried turning Nagle off in case that was part of the unstated "heuristics" affecting whether the message would be posted to the wire. Technically Microsoft's Nagle is that a small packet isn't sent if the buffer is full and there is an outstanding ACK, so it seemed like a possibility).
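    Update: given the delayed-ACK suspicion above, one experiment worth trying (offered as a hedged sketch, not a confirmed fix): disable Nagle on the sender and shrink the ACK delay on the receiver. TCP_NODELAY is standard WinSock; SIO_TCP_SET_ACK_FREQUENCY is a Windows-specific ioctl from mstcpip.h, so verify it is available on your target OS before relying on it:

        // Sketch: tuning knobs that interact with the 150-200 ms stalls described above.
        #include <winsock2.h>
        #include <mstcpip.h>   // SIO_TCP_SET_ACK_FREQUENCY (Windows-specific)

        void tune_socket(SOCKET s)
        {
            // Sender side: let small trailing segments go out immediately.
            BOOL nodelay = TRUE;
            setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                       (const char*)&nodelay, sizeof(nodelay));

            // Receiver side: ACK every segment instead of every second one,
            // taking the delayed-ACK timer out of the equation.
            DWORD freq = 1, bytes = 0;
            WSAIoctl(s, SIO_TCP_SET_ACK_FREQUENCY, &freq, sizeof(freq),
                     NULL, 0, &bytes, NULL, NULL);
        }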

    Read the article

  • Xcode newb -- #include can't find my file

    - by morgancodes
    I'm trying to get a third-party audio library (STK) working inside Xcode. Along with the standard .h files, many of the implementation files include a file called SKINI.msg. SKINI.msg is in the same directory as all of the header files. The header files are getting included fine, but the compiler complains that it can't find SKINI.msg. What do I need to do to get Xcode to happily include SKINI.msg?

    Edit: Here's the contents of SKINI.msg:

    /*********************************************************/
    /* Definition of SKINI Message Types and Special Symbols
       Synthesis toolKit Instrument Network Interface
       These symbols should have the form:
       \c __SK_<name>_
       where <name> is the string used in the SKINI stream.
       by Perry R. Cook, 1995 - 2010.
    */
    /*********************************************************/
    namespace stk {
    #define NOPE -32767
    #define YEP 1
    #define SK_DBL -32766
    #define SK_INT -32765
    #define SK_STR -32764
    #define __SK_Exit_ 999
    /***** MIDI COMPATIBLE MESSAGES *****/
    /*** (Status bytes for channel=0) ***/
    #define __SK_NoteOff_ 128
    #define __SK_NoteOn_ 144
    #define __SK_PolyPressure_ 160
    #define __SK_ControlChange_ 176
    #define __SK_ProgramChange_ 192
    #define __SK_AfterTouch_ 208
    #define __SK_ChannelPressure_ __SK_AfterTouch_
    #define __SK_PitchWheel_ 224
    #define __SK_PitchBend_ __SK_PitchWheel_
    #define __SK_PitchChange_ 49
    #define __SK_Clock_ 248
    #define __SK_SongStart_ 250
    #define __SK_Continue_ 251
    #define __SK_SongStop_ 252
    #define __SK_ActiveSensing_ 254
    #define __SK_SystemReset_ 255
    #define __SK_Volume_ 7
    #define __SK_ModWheel_ 1
    #define __SK_Modulation_ __SK_ModWheel_
    #define __SK_Breath_ 2
    #define __SK_FootControl_ 4
    #define __SK_Portamento_ 65
    #define __SK_Balance_ 8
    #define __SK_Pan_ 10
    #define __SK_Sustain_ 64
    #define __SK_Damper_ __SK_Sustain_
    #define __SK_Expression_ 11
    #define __SK_AfterTouch_Cont_ 128
    #define __SK_ModFrequency_ __SK_Expression_
    #define __SK_ProphesyRibbon_ 16
    #define __SK_ProphesyWheelUp_ 2
    #define __SK_ProphesyWheelDown_ 3
    #define __SK_ProphesyPedal_ 18
    #define __SK_ProphesyKnob1_ 21
    #define __SK_ProphesyKnob2_ 22
    /*** Instrument Family Specific ***/
    #define __SK_NoiseLevel_ __SK_FootControl_
    #define __SK_PickPosition_ __SK_FootControl_
    #define __SK_StringDamping_ __SK_Expression_
    #define __SK_StringDetune_ __SK_ModWheel_
    #define __SK_BodySize_ __SK_Breath_
    #define __SK_BowPressure_ __SK_Breath_
    #define __SK_BowPosition_ __SK_PickPosition_
    #define __SK_BowBeta_ __SK_BowPosition_
    #define __SK_ReedStiffness_ __SK_Breath_
    #define __SK_ReedRestPos_ __SK_FootControl_
    #define __SK_FluteEmbouchure_ __SK_Breath_
    #define __SK_JetDelay_ __SK_FluteEmbouchure_
    #define __SK_LipTension_ __SK_Breath_
    #define __SK_SlideLength_ __SK_FootControl_
    #define __SK_StrikePosition_ __SK_PickPosition_
    #define __SK_StickHardness_ __SK_Breath_
    #define __SK_TrillDepth_ 1051
    #define __SK_TrillSpeed_ 1052
    #define __SK_StrumSpeed_ __SK_TrillSpeed_
    #define __SK_RollSpeed_ __SK_TrillSpeed_
    #define __SK_FilterQ_ __SK_Breath_
    #define __SK_FilterFreq_ 1062
    #define __SK_FilterSweepRate_ __SK_FootControl_
    #define __SK_ShakerInst_ 1071
    #define __SK_ShakerEnergy_ __SK_Breath_
    #define __SK_ShakerDamping_ __SK_ModFrequency_
    #define __SK_ShakerNumObjects_ __SK_FootControl_
    #define __SK_Strumming_ 1090
    #define __SK_NotStrumming_ 1091
    #define __SK_Trilling_ 1092
    #define __SK_NotTrilling_ 1093
    #define __SK_Rolling_ __SK_Strumming_
    #define __SK_NotRolling_ __SK_NotStrumming_
    #define __SK_PlayerSkill_ 2001
    #define __SK_Chord_ 2002
    #define __SK_ChordOff_ 2003
    #define __SK_SINGER_FilePath_ 3000
    #define __SK_SINGER_Frequency_ 3001
    #define __SK_SINGER_NoteName_ 3002
    #define __SK_SINGER_Shape_ 3003
    #define __SK_SINGER_Glot_ 3004
    #define __SK_SINGER_VoicedUnVoiced_ 3005
    #define __SK_SINGER_Synthesize_ 3006
    #define __SK_SINGER_Silence_ 3007
    #define __SK_SINGER_VibratoAmt_ __SK_ModWheel_
    #define __SK_SINGER_RndVibAmt_ 3008
    #define __SK_SINGER_VibFreq_ __SK_Expression_
    } // stk namespace

    And here's what the compiler said:

    CompileC build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/Objects-normal/i386/BandedWG.o "../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp" normal i386 c++ com.apple.compilers.gcc.4_2
    cd /Users/morganpackard/Desktop/trashme/StkCompile
    setenv LANG en_US.US-ASCII
    setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
    /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 -x c++ -arch i386 -fmessage-length=0 -pipe -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -D__IPHONE_OS_VERSION_MIN_REQUIRED=30000 -isysroot /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.2.sdk -fvisibility=hidden -fvisibility-inlines-hidden -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/StkCompile-generated-files.hmap -I/Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/StkCompile-own-target-headers.hmap -I/Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/StkCompile-all-target-headers.hmap -iquote /Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/StkCompile-project-headers.hmap -F/Users/morganpackard/Desktop/trashme/StkCompile/build/Debug-iphonesimulator -I/Users/morganpackard/Desktop/trashme/StkCompile/build/Debug-iphonesimulator/include -I/Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/DerivedSources/i386 -I/Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/DerivedSources -include /var/folders/dx/dxSUSyOJFv0MBEh9qC1oJ++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/StkCompile_Prefix-bopqzvwpuyqltrdumgtjtfrjvtzb/StkCompile_Prefix.pch -c "/Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp" -o /Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/Objects-normal/i386/BandedWG.o

    /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:33:21: error: SKINI.msg: No such file or directory
    /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp: In member function 'virtual void stk::BandedWG::controlChange(int, stk::StkFloat)':
    /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:326: error: '__SK_BowPressure_' was not declared in this scope
    /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:342: error: '__SK_AfterTouch_Cont_' was not declared in this scope
    /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:349: error: '__SK_ModWheel_' was not declared in this scope
    /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:357: error: '__SK_ModFrequency_' was not declared in this scope
    /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:359: error: '__SK_Sustain_' was not declared in this scope
    /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:363: error: '__SK_Portamento_' was not declared in this scope
    /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:367: error: '__SK_ProphesyRibbon_' was not declared in this scope
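    Edit 2: a hedged observation from staring at the compile command: none of the -I flags above point at the STK directory that actually contains SKINI.msg, so #include "SKINI.msg" has nowhere to resolve from BandedWG.cpp. If that's right, adding the directory that holds SKINI.msg (the stk-4.4.2 include directory - an assumption on my part, wherever it lives in your checkout) to the target's Header Search Paths should fix it, which amounts to an extra flag like:

        -I"/Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/include"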

    Read the article

  • How to overcome shortcomings in reporting from EAV database?

    - by David Archer
    The major shortcomings with Entity-Attribute-Value database designs in SQL all seem to be related to being able to query and report on the data efficiently and quickly. Most of the information I read on the subject warns against implementing EAV due to these problems and the commonality of querying/reporting for almost all applications. I am currently designing a system where almost all of the fields necessary for data storage are not known at design/compile time and are defined by the end-user of the system. EAV seems like a good fit for this requirement, but due to the problems I've read about, I am hesitant to implement it, as there are also some pretty heavy reporting requirements for this system. I think I've come up with a way around this, but would like to pose the question to the SO community.

    Given that a typical normalized database (OLTP) still isn't always the best option for running reports, a good practice seems to be having a "reporting" database (OLAP) where the data from the normalized database is copied to, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design?

    The main downside I see is the increased complexity of transferring the data from the EAV database to reporting, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible, and seems to be an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (i.e. CouchDB or similar) for the main data storage, since all the standard reporting tools expect a SQL backend to query against.

    Do the issues with EAV systems mostly go away if you have a separate reporting database for querying?

    EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued, and I'm also required to track history for each. The normalized design for this ends up being one table per field, which makes querying it kind of painful anyway.

    Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on, but I think it illustrates the point well):

    EAV Tables

    Person
    -------------------
    - Id  - Name      -
    -------------------
    - 123 - Joe Smith -
    -------------------

    Person_Value
    -------------------------------------------------------------------
    - PersonId - Source - Field       - Value         - EffectiveDate -
    -------------------------------------------------------------------
    - 123      - CIA    - HomeAddress - 123 Cherry Ln - 2010-03-26    -
    - 123      - DMV    - HomeAddress - 561 Stoney Rd - 2010-02-15    -
    - 123      - FBI    - HomeAddress - 676 Lancas Dr - 2010-03-01    -
    -------------------------------------------------------------------

    Reporting Table

    Person_Denormalized
    ----------------------------------------------------------------------------------------
    - Id  - Name      - HomeAddress   - HomeAddress_Confidence - HomeAddress_EffectiveDate -
    ----------------------------------------------------------------------------------------
    - 123 - Joe Smith - 123 Cherry Ln - 0.713                  - 2010-03-26                -
    ----------------------------------------------------------------------------------------

    Normalized Design

    Person
    -------------------
    - Id  - Name      -
    -------------------
    - 123 - Joe Smith -
    -------------------

    Person_HomeAddress
    ------------------------------------------------------
    - PersonId - Source - Value         - Effective Date -
    ------------------------------------------------------
    - 123      - CIA    - 123 Cherry Ln - 2010-03-26     -
    - 123      - DMV    - 561 Stoney Rd - 2010-02-15     -
    - 123      - FBI    - 676 Lancas Dr - 2010-03-01     -
    ------------------------------------------------------

    The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) in SQL, so my most common operation, besides inserting new values, will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model, as I can do it in a single query. In the normalized design, I end up having to do one query per field to avoid a massive cartesian product from joining them all together.
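    To illustrate the "single query" point concretely, here is a hedged sketch of the two access patterns, written against the sample schemas above (column and table names are taken from the examples, nothing more):

    -- EAV: one query returns every field, every source, full history for a person.
    SELECT Field, Source, Value, EffectiveDate
    FROM   Person_Value
    WHERE  PersonId = 123;

    -- Normalized: one query per field table, e.g. just the home address history.
    SELECT Source, Value, [Effective Date]
    FROM   Person_HomeAddress
    WHERE  PersonId = 123;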

    Read the article

  • Java PHP posting using URLConnector, PHP file doesn't seem to receive parameters

    - by Emdiesse
    Hi there, I am trying to post some simple string data to my PHP script via a Java application. My PHP script works fine when I enter the data myself using a web browser (newvector.php?x1=&y1=...). However, using my Java application, the PHP file does not seem to pick up these parameters. In fact, I don't even know if they are being sent at all, because if I comment out one of the parameters in the Java code when I am building dta, the script doesn't even return the "You must include 6 parameters" error.

    newvector.php:

    if(!isset($_GET['x1']) || !isset($_GET['y1']) || !isset($_GET['t1']) ||
       !isset($_GET['x2']) || !isset($_GET['y2']) || !isset($_GET['t2'])){
        die("You must include 6 parameters within the URL: x1, y1, t1, x2, y2, t2");
    }
    $x1 = $_GET['x1'];
    $x2 = $_GET['x2'];
    $y1 = $_GET['y1'];
    $y2 = $_GET['y2'];
    $t1 = $_GET['t1'];
    $t2 = $_GET['t2'];
    $insert = "INSERT INTO vectors( x1, x2, y1, y2, t1, t2 )
               VALUES ( '$x1', '$x2', '$y1', '$y2', '$t1', '$t2' )";
    if(!mysql_query($insert, $conn)){
        die('Error: ' . mysql_error());
    }
    echo "Submitted Data x1=".$x1." y1=".$y1." t1=".$t1." x2=".$x2." y2=".$y2." t2=".$t2;
    include 'db_disconnect.php';
    ?>

    The Java code:

    else if (action.equals("Play")) {
        for (int i = 0; i < 5; i++) { // data.size()
            String x1, y1, t1, x2, y2, t2 = "";
            String date = "2010-04-03 ";
            String ms = ".0";
            x1 = data.elementAt(i)[1];
            y1 = data.elementAt(i)[0];
            t1 = date + data.elementAt(i)[2] + ms;
            x2 = data.elementAt(i)[4];
            y2 = data.elementAt(i)[3];
            t2 = date + data.elementAt(i)[5] + ms;
            try {
                //Create Post String
                String dta = URLEncoder.encode("x1", "UTF-8") + "=" + URLEncoder.encode(x1, "UTF-8");
                dta += "&" + URLEncoder.encode("y1", "UTF-8") + "=" + URLEncoder.encode(y1, "UTF-8");
                dta += "&" + URLEncoder.encode("t1", "UTF-8") + "=" + URLEncoder.encode(t1, "UTF-8");
                dta += "&" + URLEncoder.encode("x2", "UTF-8") + "=" + URLEncoder.encode(x2, "UTF-8");
                dta += "&" + URLEncoder.encode("y2", "UTF-8") + "=" + URLEncoder.encode(y2, "UTF-8");
                dta += "&" + URLEncoder.encode("t2", "UTF-8") + "=" + URLEncoder.encode(t2, "UTF-8");
                System.out.println(dta);
                // Send Data To Page
                URL url = new URL("http://localhost/newvector.php");
                URLConnection conn = url.openConnection();
                conn.setDoOutput(true);
                OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream());
                wr.write(dta);
                wr.flush();
                // Get The Response
                BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                String line;
                while ((line = rd.readLine()) != null) {
                    System.out.println(line); //you Can Break The String Down Here
                }
                wr.close();
                rd.close();
            } catch (Exception exc) {
                System.out.println("Hmmm!!! " + exc.getMessage());
            }
        }
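    Update: one likely mismatch, offered as an observation rather than a confirmed diagnosis: writing to the URLConnection's output stream sends the data as a POST body, but the PHP script reads $_GET, which only sees query-string parameters. Either read $_POST (or $_REQUEST) in PHP, or put the already-encoded dta string on the URL so $_GET picks it up - a minimal sketch of the latter:

        // Sketch: send the parameters as a GET query string so $_GET sees them.
        URL url = new URL("http://localhost/newvector.php?" + dta);
        BufferedReader rd = new BufferedReader(
                new InputStreamReader(url.openConnection().getInputStream()));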

    Read the article

  • "Invalid form control" only in Google Chrome

    - by MFB
    The code below works well in Safari, but in Chrome and Firefox the form will not submit. The Chrome console logs the error: An invalid form control with name='' is not focusable. Any ideas? Note that whilst the controls below do not have names, they should have names at the time of submission, populated by the Javascript below. The form DOES work in Safari.

    <form method="POST" action="/add/bundle">
        <p>
            <input type="text" name="singular" placeholder="Singular Name" required>
            <input type="text" name="plural" placeholder="Plural Name" required>
        </p>
        <h4>Asset Fields</h4>
        <div class="template-view" id="template_row" style="display:none">
            <input type="text" data-keyname="name" placeholder="Field Name" required>
            <input type="text" data-keyname="hint" placeholder="Hint">
            <select data-keyname="fieldtype" required>
                <option value="">Field Type...</option>
                <option value="Email">Email</option>
                <option value="Password">Password</option>
                <option value="Text">Text</option>
            </select>
            <input type="checkbox" data-keyname="required" value="true"> Required
            <input type="checkbox" data-keyname="search" value="true"> Searchable
            <input type="checkbox" data-keyname="readonly" value="true"> ReadOnly
            <input type="checkbox" data-keyname="autocomplete" value="true"> AutoComplete
            <input type="radio" data-keyname="label" value="label" name="label"> Label
            <input type="radio" data-keyname="unique" value="unique" name="unique"> Unique
            <button class="add" type="button">+</button>
            <button class="remove" type="button">-</button>
        </div>
        <div id="target_list"></div>
        <p><input type="submit" name="form.submitted" value="Submit" autofocus></p>
    </form>
    <script>
    function addDiv() {
        var pCount = $('.template-view', '#target_list').length;
        var pClone = $('#template_row').clone();
        $('select, input, textarea', pClone).each(function(idx, el){
            $el = $(this);
            if ((el).type == 'radio'){
                $el.attr('value', pCount + '_' + $el.data('keyname'));
            } else {
                $el.attr('name', pCount + '_' + $el.data('keyname'));
            };
        });
        $('#target_list').append(pClone);
        pClone.show();
    }
    function removeDiv(elem){
        var pCount = $('.template-view', '#target_list').length;
        if (pCount != 1) {
            $(elem).closest('.template-view').remove();
        }
    };
    $('.add').live('click', function(){ addDiv(); });
    $('.remove').live('click', function(){ removeDiv(this); });
    $(document).ready(addDiv);
    </script>
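    Update: for what it's worth, Chrome's message usually means a required control is being validated while it cannot be focused - and the hidden #template_row above still sits inside the form with required controls whose names were never filled in. A hedged workaround (a sketch, not a verified fix for this exact page): keep the template's controls disabled, since disabled controls are skipped by constraint validation, and re-enable them on each visible clone:

        // Sketch: disable controls inside the hidden template so Chrome skips them,
        // then re-enable them on each clone before it is shown.
        $('#template_row').find('input, select, textarea').prop('disabled', true);

        function addDiv() {
            var pClone = $('#template_row').clone();
            pClone.find('input, select, textarea').prop('disabled', false);
            // ... the existing renaming logic from the post goes here ...
            $('#target_list').append(pClone.show());
        }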

    Read the article

  • Compress file to bytes for uploading to SQL Server

    - by Chris
    I am trying to zip files to an SQL Server database table. I can't ensure that the user of the tool has write privileges on the source file folder, so I want to load the file into memory, compress it to an array of bytes and insert it into my database. This below does not work.

    class ZipFileToSql
    {
        public event MessageHandler Message;

        protected virtual void OnMessage(string msg)
        {
            if (Message != null)
            {
                MessageHandlerEventArgs args = new MessageHandlerEventArgs();
                args.Message = msg;
                Message(this, args);
            }
        }

        private int sourceFileId;
        private SqlConnection Conn;
        private string PathToFile;
        private bool isExecuting;

        public bool IsExecuting { get { return isExecuting; } }
        public int SourceFileId { get { return sourceFileId; } }

        public ZipFileToSql(string pathToFile, SqlConnection conn)
        {
            isExecuting = false;
            PathToFile = pathToFile;
            Conn = conn;
        }

        public void Execute()
        {
            isExecuting = true;
            byte[] data;
            byte[] cmpData;
            //create temp zip file
            OnMessage("Reading file to memory");
            FileStream fs = File.OpenRead(PathToFile);
            data = new byte[fs.Length];
            ReadWholeArray(fs, data);
            OnMessage("Zipping file to memory");
            MemoryStream ms = new MemoryStream();
            GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true);
            zip.Write(data, 0, data.Length);
            cmpData = new byte[ms.Length];
            ReadWholeArray(ms, cmpData);
            OnMessage("Saving file to database");
            using (SqlCommand cmd = Conn.CreateCommand())
            {
                cmd.CommandText = @"MergeFileUploads";
                cmd.CommandType = CommandType.StoredProcedure;
                //cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = data;
                cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = cmpData;
                SqlParameter p = new SqlParameter();
                p.ParameterName = "@SourceFileId";
                p.Direction = ParameterDirection.Output;
                p.SqlDbType = SqlDbType.Int;
                cmd.Parameters.Add(p);
                cmd.ExecuteNonQuery();
                sourceFileId = (int)p.Value;
            }
            OnMessage("File Saved");
            isExecuting = false;
        }

        private void ReadWholeArray(Stream stream, byte[] data)
        {
            int offset = 0;
            int remaining = data.Length;
            float Step = data.Length / 100;
            float NextStep = data.Length - Step;
            while (remaining > 0)
            {
                int read = stream.Read(data, offset, remaining);
                if (read <= 0)
                    throw new EndOfStreamException(
                        String.Format("End of stream reached with {0} bytes left to read", remaining));
                remaining -= read;
                offset += read;
                if (remaining < NextStep)
                {
                    NextStep -= Step;
                }
            }
        }
    }
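    Update: a possible culprit, not yet verified against my full tool: the MemoryStream is read before the GZipStream has been closed, so the final compressed blocks were never flushed into it, and it is then read from its current position (the end) rather than from position zero. A minimal corrected sketch of just that section:

        // Sketch: close the GZipStream to flush its trailer, then snapshot the buffer.
        byte[] cmpData;
        using (MemoryStream ms = new MemoryStream())
        {
            using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true))
            {
                zip.Write(data, 0, data.Length);
            }   // Dispose() flushes the remaining compressed bytes into ms.
            cmpData = ms.ToArray();   // copies the whole buffer regardless of Position
        }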

    Read the article

  • ASP.NET how to implement IServiceLayer

    - by rockinthesixstring
    I'm trying to follow the tutorial found here to implement a service layer in my MVC application. What I can't figure out is how to wire it all up. Here's what I have so far.

    IUserRepository.vb

    Namespace Data
        Public Interface IUserRepository
            Sub AddUser(ByVal openid As String)
            Sub UpdateUser(ByVal id As Integer, ByVal about As String, ByVal birthdate As DateTime, ByVal openid As String, ByVal regionid As Integer, ByVal username As String, ByVal website As String)
            Sub UpdateUserReputation(ByVal id As Integer, ByVal AmountOfReputation As Integer)
            Sub DeleteUser(ByVal id As Integer)
            Function GetAllUsers() As IList(Of User)
            Function GetUserByID(ByVal id As Integer) As User
            Function GetUserByOpenID(ByVal openid As String) As User
        End Interface
    End Namespace

    UserRepository.vb

    Namespace Data
        Public Class UserRepository : Implements IUserRepository
            Private dc As DataDataContext

            Public Sub New()
                dc = New DataDataContext
            End Sub

    #Region "IUserRepository Members"
            Public Sub AddUser(ByVal openid As String) Implements IUserRepository.AddUser
                Dim user = New User
                user.LastSeen = DateTime.Now
                user.MemberSince = DateTime.Now
                user.OpenID = openid
                user.Reputation = 0
                user.UserName = String.Empty
                dc.Users.InsertOnSubmit(user)
                dc.SubmitChanges()
            End Sub

            Public Sub UpdateUser(ByVal id As Integer, ByVal about As String, ByVal birthdate As Date, ByVal openid As String, ByVal regionid As Integer, ByVal username As String, ByVal website As String) Implements IUserRepository.UpdateUser
                Dim user = (From u In dc.Users Where u.ID = id Select u).Single
                user.About = about
                user.BirthDate = birthdate
                user.LastSeen = DateTime.Now
                user.OpenID = openid
                user.RegionID = regionid
                user.UserName = username
                user.WebSite = website
                dc.SubmitChanges()
            End Sub

            Public Sub UpdateUserReputation(ByVal id As Integer, ByVal AmountOfReputation As Integer) Implements IUserRepository.UpdateUserReputation
                Dim user = (From u In dc.Users Where u.ID = id Select u).FirstOrDefault
                ''# Simply take the current reputation from the select statement
                ''# and add the proper "AmountOfReputation"
                user.Reputation = user.Reputation + AmountOfReputation
                dc.SubmitChanges()
            End Sub

            Public Sub DeleteUser(ByVal id As Integer) Implements IUserRepository.DeleteUser
                Dim user = (From u In dc.Users Where u.ID = id Select u).FirstOrDefault
                dc.Users.DeleteOnSubmit(user)
                dc.SubmitChanges()
            End Sub

            Public Function GetAllUsers() As System.Collections.Generic.IList(Of User) Implements IUserRepository.GetAllUsers
                Dim users = From u In dc.Users Select u
                Return users.ToList
            End Function

            Public Function GetUserByID(ByVal id As Integer) As User Implements IUserRepository.GetUserByID
                Dim user = (From u In dc.Users Where u.ID = id Select u).FirstOrDefault
                Return user
            End Function

            Public Function GetUserByOpenID(ByVal openid As String) As User Implements IUserRepository.GetUserByOpenID
                Dim user = (From u In dc.Users Where u.OpenID = openid Select u).FirstOrDefault
                Return user
            End Function
    #End Region
        End Class
    End Namespace

    IUserService.vb

    Namespace Data
        Interface IUserService
        End Interface
    End Namespace

    UserService.vb

    Namespace Data
        Public Class UserService : Implements IUserService
            Private _ValidationDictionary As IValidationDictionary
            Private _repository As IUserRepository

            Public Sub New(ByVal validationDictionary As IValidationDictionary, ByVal repository As IUserRepository)
                _ValidationDictionary = validationDictionary
                _repository = repository
            End Sub

            Protected Function ValidateUser(ByVal UserToValidate As User) As Boolean
                Dim isValid As Boolean = True
                If UserToValidate.OpenID.Trim().Length = 0 Then
                    _ValidationDictionary.AddError("OpenID", "OpenID is Required")
                    isValid = False
                End If
                If UserToValidate.MemberSince = Nothing Then
                    _ValidationDictionary.AddError("MemberSince", "MemberSince is Required")
                    isValid = False
                End If
                If UserToValidate.LastSeen = Nothing Then
                    _ValidationDictionary.AddError("LastSeen", "LastSeen is Required")
                    isValid = False
                End If
                If UserToValidate.Reputation = Nothing Then
                    _ValidationDictionary.AddError("Reputation", "Reputation is Required")
                    isValid = False
                End If
                Return isValid
            End Function
        End Class
    End Namespace

    I have also wired up the IValidationDictionary.vb and the ModelStateWrapper.vb as described in the article above. What I'm having a problem with is actually implementing it in my controller. My controller looks something like this:

    Public Class UsersController : Inherits BaseController
        Private UserService As Data.IUserService

        Public Sub New()
            UserService = New Data.UserService(New Data.ModelStateWrapper(Me.ModelState), New Data.UserRepository)
        End Sub

        Public Sub New(ByVal service As Data.IUserService)
            UserService = service
        End Sub
        ....
    End Class

    However, on the line that says Public Sub New(ByVal service As Data.IUserService) I'm getting an error:

    'service' cannot expose type 'Data.IUserService' outside the project through class 'UsersController'

    So my question has TWO PARTS:

    1. How can I properly implement a service layer in my application using the concepts from that article?
    2. Should there be any content within my IUserService.vb?
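    UPDATE: the compiler error itself points at the first fix: Interface IUserService is declared without an access modifier, so it defaults to Friend, and a Public constructor cannot expose a Friend type. Declaring the interface Public should clear the error - a sketch (the member shown is a plausible placeholder I invented, not something from the article):

        Namespace Data
            ''# Public so it can appear in a Public constructor signature.
            Public Interface IUserService
                ''# Hypothetical starting member; replace with your real operations.
                Function CreateUser(ByVal openid As String) As Boolean
            End Interface
        End Namespace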

    Read the article

  • Looking for best practise for writing a serial device communication app

    - by cdotlister
    I am pretty new to serial comms, but would like advice on how best to achieve a robust application which speaks to and listens to a serial device. I have managed to make use of System.IO.SerialPort, and successfully connected to, sent data to and received data from my device.

    The way things work is this: my application connects to the COM port and opens the port... I then connect my device to the COM port, and it detects a connection to the PC, so it sends a bit of text. It's really just copyright info, as well as the version of the firmware. I don't do anything with that, except display it in my 'activity' window. The device then waits.

    I can then query information by sending a command such as 'QUERY PARAMETER1'. It then replies with something like: 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n'. I then process that. I can then update it by sending 'SET PARAMETER1 12345', and it will reply with 'QUERY PARAMETER1\r\n\r\n12345\r\n\r\n'. All pretty basic.

    So, what I have done is created a communication class. This class is run in its own thread, sends data back to the main form... and also allows me to send messages to it. Sending data is easy. Receiving is a bit more tricky. I have employed the DataReceived event, and whenever data comes in, I echo that to my screen.

    My problem is this: when I send a command, I feel I am being very dodgy in my handling. Let's say I am sending 'QUERY PARAMETER1'. I send the command to the device, I then put 'PARAMETER1' into a global variable, and I do a Thread.Sleep(100). In the DataReceived handler, I then have a bit of logic that checks the incoming data and sees if the string CONTAINS the value in the global variable. As the reply may be 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n', it sees that it contains my parameter, parses the string, and returns the value I am looking for, placing it into another global variable. My sending method was sleeping for 100 ms. It then wakes and checks the returned global variable. If it has data... then I'm happy, and I process the data.

    The problem is... if the sleep is too short... it will fail. And I feel it's flaky... putting stuff into variables... then waiting...

    The other option is to use ReadLine instead, but that's very blocking. So I remove the DataReceived method, and instead... just send the data... then call ReadLine(). That may give me better results. There's no time, except when we connect initially, that data comes from the device without me requesting it. So maybe ReadLine will be simpler and safer? Is this what is known as 'blocking' reads? Also, can I set a timeout? Hopefully someone can guide me.
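    Update: since the device only talks when spoken to (apart from the banner on connect), here is a sketch of the blocking request/response approach I'm considering, using ReadLine with a timeout instead of DataReceived plus shared globals. Timings and the skip-the-echo logic are guesses based on the reply format above:

        // Sketch: one synchronous query; ReadLine throws TimeoutException if the
        // device stays silent past ReadTimeout.
        using System.IO.Ports;

        string Query(SerialPort port, string command)
        {
            port.ReadTimeout = 500;        // ms; tune to the device's worst case
            port.NewLine = "\r\n";
            port.DiscardInBuffer();        // drop any stale banner text
            port.WriteLine(command);       // e.g. "QUERY PARAMETER1"
            string reply;
            do {
                reply = port.ReadLine().Trim();   // skips the echo and blank lines
            } while (reply.Length == 0 || reply == command);
            return reply;                  // e.g. "76767"
        }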

    Read the article

  • How can I disable 'output escaping' in minidom

    - by William
    I'm trying to build an XML document from scratch using xml.dom.minidom. Everything was going well until I tried to make a text node with a ® (registered trademark) symbol in it. My objective is that when I finally hit print mydoc.toxml(), this particular node will actually contain a ® symbol.

    First I tried:

    import xml.dom.minidom as mdom
    data = '®'

    which gives the rather obvious error of:

    File "C:\src\python\HTMLGen\test2.py", line 3
    SyntaxError: Non-ASCII character '\xae' in file C:\src\python\HTMLGen\test2.py on line 3, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details

    I have of course also tried changing the encoding of my Python script to 'utf-8' using the opening line comment method, but this didn't help. So I thought:

    import xml.dom.minidom as mdom
    data = '&#174;'  # Both accepted xml encodings for registered trademark
    data = '&reg;'
    text = mdom.Text()
    text.data = data
    print data
    print text.toxml()

    But because the ampersands are being escaped when I print text.toxml(), I get this output:

    &reg;
    &amp;reg;

    My question is: does anybody know of a way that I can force the ampersands not to be escaped in the output, so that I can have my special character reference carry through to the XML document? Basically, for this node, I want print text.toxml() to produce output of &reg; or &#174; in a happy and cooperative way!

    EDIT 1: By the way, if minidom actually doesn't have this capacity, I am perfectly happy using another module that you can recommend which does.

    EDIT 2: As Hugh suggested, I tried using data = u'®' (while also using the # -*- coding: utf-8 -*- Python source tag). This almost helped, in the sense that it actually caused the ® symbol itself to be output to my XML. This is actually not the result I am looking for. As you may have guessed by now (and perhaps I should have specified earlier), this XML document happens to be an HTML page, which needs to work in a browser. So having ® in the document ends up causing rubbish in the browser (Â® to be precise!). I also tried:

    data = unichr(174)
    text.data = data.encode('ascii','xmlcharrefreplace')
    print text.toxml()

    But of course this led to the same original problem, where all that happens is the ampersand gets escaped by .toxml(). My ideal scenario would be some way of escaping the ampersand so that the XML printing function won't "escape" it on my behalf for the document (in other words, achieving my original goal of having &reg; or &#174; appear in the document). Seems like soon I'm going to have to resort to regular expressions!

    EDIT 2a: Or perhaps not. Seems like getting my HTML meta information correct (<META http-equiv="Content-Type" Content="text/html; charset=UTF-8">) could help, but I'm not sure yet how this fits in with the XML structure...
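    EDIT 3: for completeness, one approach that does work with plain minidom (a sketch, using only standard-library calls): keep the character as Unicode inside the DOM, serialize to a Unicode string, and only then encode with xmlcharrefreplace. The replacement happens at output time, after minidom's own escaping, so ® becomes &#174; instead of getting double-escaped:

        # -*- coding: utf-8 -*-
        # Sketch: charref-replace at serialization time, not inside the text node.
        import xml.dom.minidom as mdom

        text = mdom.Text()
        text.data = u'\u00ae'          # the actual (R) character, kept as Unicode

        xml_bytes = text.toxml().encode('ascii', 'xmlcharrefreplace')
        print xml_bytes                # -> &#174;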

    Read the article

  • Matrix Multiplication with Threads: Why is it not faster?

    - by prelic
    Hey all, so I've been playing around with pthreads, specifically trying to calculate the product of two matrices. My code is extremely messy because it was just supposed to be a quick little fun project for myself, but the thread theory I used was very similar to:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define M 3
    #define K 2
    #define N 3
    #define NUM_THREADS 10

    int A [M][K] = { {1,4}, {2,5}, {3,6} };
    int B [K][N] = { {8,7,6}, {5,4,3} };
    int C [M][N];

    struct v {
        int i; /* row */
        int j; /* column */
    };

    void *runner(void *param); /* the thread */

    int main(int argc, char *argv[]) {
        int i,j, count = 0;
        for(i = 0; i < M; i++) {
            for(j = 0; j < N; j++) {
                //Assign a row and column for each thread
                struct v *data = (struct v *) malloc(sizeof(struct v));
                data->i = i;
                data->j = j;
                /* Now create the thread passing it data as a parameter */
                pthread_t tid;       //Thread ID
                pthread_attr_t attr; //Set of thread attributes
                //Get the default attributes
                pthread_attr_init(&attr);
                //Create the thread
                pthread_create(&tid,&attr,runner,data);
                //Make sure the parent waits for all thread to complete
                pthread_join(tid, NULL);
                count++;
            }
        }
        //Print out the resulting matrix
        for(i = 0; i < M; i++) {
            for(j = 0; j < N; j++) {
                printf("%d ", C[i][j]);
            }
            printf("\n");
        }
    }

    //The thread will begin control in this function
    void *runner(void *param) {
        struct v *data = param; // the structure that holds our data
        int n, sum = 0; //the counter and sum
        //Row multiplied by column
        for(n = 0; n< K; n++){
            sum += A[data->i][n] * B[n][data->j];
        }
        //assign the sum to its coordinate
        C[data->i][data->j] = sum;
        //Exit the thread
        pthread_exit(0);
    }

    source: http://macboypro.com/blog/2009/06/29/matrix-multiplication-in-c-using-pthreads-on-linux/

    For the non-threaded version, I used the same setup (3 2-d matrices, dynamically allocated structs to hold r/c), and added a timer. First trials indicated that the non-threaded version was faster. My first thought was that the dimensions were too small to notice a difference, and it was taking longer to create the threads. So I upped the dimensions to about 50x50, randomly filled, and ran it, and I'm still not seeing any performance upgrade with the threaded version. What am I missing here?
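    Update: a hedged observation on the benchmark itself, in case it helps others: pthread_join is called inside the creation loop, so each thread runs to completion before the next is created - the "threaded" version is effectively serial plus thread-creation overhead. A sketch of the usual restructuring (coarser work per thread, e.g. one row each, would help further):

        /* Sketch: create all threads first, then join them all afterwards. */
        pthread_t tids[M * N];
        int t = 0;
        for (i = 0; i < M; i++) {
            for (j = 0; j < N; j++) {
                struct v *data = malloc(sizeof(struct v));
                data->i = i;
                data->j = j;
                pthread_create(&tids[t++], NULL, runner, data);
            }
        }
        for (i = 0; i < t; i++)   /* join only after everything is running */
            pthread_join(tids[i], NULL);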

    Read the article

  • How can I hit my database with an AJAX call using javascript?

    - by tmedge
    I am pretty new at this stuff, so bear with me. I am using ASP.NET MVC. I have created an overlay to cover the page when someone clicks a button corresponding to a certain database entry. Because of this, ALL of my code for this functionality is in a .js file contained within my project. What I need to do is pull the info corresponding to my entry from the database itself using an AJAX call, and place that into my textboxes. Then, after the end-user has made the desired changes, I need to update that entry's values to match the input. I've been surfing the web for a while, and have failed to find an example that fits my needs effectively. Here is my code in my javascript file thus far:

        function editOverlay(picId) {
            //pull up an overlay
            $('body').append('<div class="overlay" />');
            var $overlayClass = $('.overlay');
            $overlayClass.append('<div class="dataModal" />');
            var $data = $('.dataModal');
            overlaySetup($overlayClass, $data);
            //set up form
            $data.append('<h1>Edit Picture</h1><br /><br />');
            $data.append('Picture name: &nbsp;');
            $data.append('<input class="picName" /> <br /><br /><br />');
            $data.append('Relative url: &nbsp;');
            $data.append('<input class="picRelURL" /> <br /><br /><br />');
            $data.append('Description: &nbsp;');
            $data.append('<textarea class="picDescription" /> <br /><br /><br />');
            var $nameBox = $('.picName');
            var $urlBox = $('.picRelURL');
            var $descBox = $('.picDescription');
            var pic = null;
            //this is where I need to pull the actual object from the db
            //var imgList =
            for (var temp in imgList) {
                if (temp.Id == picId) {
                    pic = temp;
                }
            }
            /*
            $nameBox.attr('value', pic.Name);
            $urlBox.attr('value', pic.RelativeURL);
            $descBox.attr('value', pic.Description);
            */
            //close buttons
            $data.append('<input type="button" value="Save Changes" class="saveButton" />');
            $data.append('<input type="button" value="Cancel" class="cancelButton" />');
            $('.saveButton').click(function() {
                /*
                pic.Name = $nameBox.attr('value');
                pic.RelativeURL = $urlBox.attr('value');
                pic.Description = $descBox.attr('value');
                */
                //make a call to my Save() method in my repository
                CloseOverlay();
            });
            $('.cancelButton').click(function() {
                CloseOverlay();
            });
        }

    The stuff I have commented out is what I need to accomplish and/or is not available until prior issues are resolved. Any and all advice is appreciated! Remember, I am VERY new to this stuff (two weeks, to be exact) and will probably need highly explicit instructions. BTW: overlaySetup() and CloseOverlay() are functions I have living someplace else. Thanks!
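    A hedged sketch of one way to wire this up: expose two controller actions that return and accept JSON, then call them with jQuery. The controller and action names (/Pictures/GetPicture, /Pictures/SavePicture) are invented for illustration; substitute your own routes. Note that in MVC 2 and later, an action that returns Json for a GET request must opt in with JsonRequestBehavior.AllowGet:

        // Hypothetical endpoints; adjust URLs to your controller/action routes.
        // Load the entry and fill the textboxes:
        $.getJSON('/Pictures/GetPicture', { id: picId }, function (pic) {
            $nameBox.val(pic.Name);
            $urlBox.val(pic.RelativeURL);
            $descBox.val(pic.Description);
        });

        // Inside the saveButton click handler, push the edits back:
        $.post('/Pictures/SavePicture', {
            id: picId,
            name: $nameBox.val(),
            relativeUrl: $urlBox.val(),
            description: $descBox.val()
        }, function () {
            CloseOverlay();
        });

    On the server side, the matching actions would return Json(pic) for the GET and call your repository's Save() inside the POST action.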

    Read the article

  • Basic C question, concerning memory allocation and value assignment

    - by VHristov
    Hi there, I have recently started working on my master thesis in C, which I haven't used in quite a long time. Being used to Java, I'm now facing all kinds of problems all the time. I hope someone can help me with the following one, since I've been struggling with it for the past two days. So I have a really basic model of a database: tables, tuples, attributes, and I'm trying to load some data into this structure. Following are the definitions:

        typedef struct attribute {
            int type;
            char * name;
            void * value;
        } attribute;

        typedef struct tuple {
            int tuple_id;
            int attribute_count;
            attribute * attributes;
        } tuple;

        typedef struct table {
            char * name;
            int row_count;
            tuple * tuples;
        } table;

    Data is coming from a file with inserts (generated for the Wisconsin benchmark), which I'm parsing. I have only integer or string values. A sample row would look like:

        insert into table values (9205, 541, 1, 1, 5, 5, 5, 5, 0, 1, 9205, 10, 11, 'HHHHHHH', 'HHHHHHH', 'HHHHHHH');

    I've "managed" to load and parse the data and also to assign it. However, the assignment bit is buggy, since all values point to the same memory location, i.e. all rows look identical after I've loaded the data. Here is what I do:

        char value[10]; // assuming no value is longer than 10 chars
        int i, j, k;
        table * data = (table*) malloc(sizeof(data));
        data->name = "table";
        data->row_count = number_of_lines;
        data->tuples = (tuple*) malloc(number_of_lines*sizeof(tuple));
        tuple* current_tuple;
        for(i=0; i<number_of_lines; i++) {
            current_tuple = &data->tuples[i];
            current_tuple->tuple_id = i;
            current_tuple->attribute_count = 16; // static in our system
            current_tuple->attributes = (attribute*) malloc(16*sizeof(attribute));
            for(k = 0; k < 16; k++) {
                current_tuple->attributes[k].name = attribute_names[k];
                // for int values:
                current_tuple->attributes[k].type = DB_ATT_TYPE_INT;
                // write data into value-field
                int v = atoi(value);
                current_tuple->attributes[k].value = &v;
                // for string values:
                current_tuple->attributes[k].type = DB_ATT_TYPE_STRING;
                current_tuple->attributes[k].value = value;
            }
            // ...
        }

    While I am perfectly aware why this is not working, I can't figure out how to get it working. I've tried the following things, none of which worked:

        memcpy(current_tuple->attributes[k].value, &v, sizeof(int));

    This results in a bad access error. Same for the following code (since I'm not quite sure which one would be the correct usage):

        memcpy(current_tuple->attributes[k].value, &v, 1);

    Not even sure if memcpy is what I need here... Also I've tried allocating memory, by doing something like:

        current_tuple->attributes[k].value = (int *) malloc(sizeof(int));

    only to get "malloc: * error for object 0x100108e98: incorrect checksum for freed object - object was probably modified after being freed." As far as I understand this error, memory has already been allocated for this object, but I don't see where that happened. Doesn't malloc(sizeof(attribute)) only allocate the memory needed to store an integer and two pointers (i.e. not the memory those pointers point to)? Any help would be greatly appreciated! Regards, Vassil
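    For what it's worth, a hedged sketch of how each attribute could get its own storage. The attribute type and the DB_ATT_TYPE_* constants follow the question; the helper name is invented. Two separate problems are addressed: sizeof(data) measures the pointer rather than the struct, and value must point at freshly allocated memory instead of the shared stack buffer:

        #include <stdlib.h>
        #include <string.h>

        /* Where the table is allocated: sizeof(data) in the question is the
           size of the pointer, not the struct. */
        table * data = malloc(sizeof(table));

        /* Give one attribute private heap storage for its value. */
        void set_attribute_value(attribute *att, int att_type, const char *value)
        {
            att->type = att_type;
            if (att_type == DB_ATT_TYPE_INT) {
                int *p = malloc(sizeof(int)); /* one int per attribute */
                *p = atoi(value);
                att->value = p;
            } else { /* DB_ATT_TYPE_STRING */
                att->value = strdup(value);   /* heap copy, not the shared buffer */
            }
        }

    In the question's inner loop, one call to set_attribute_value(&current_tuple->attributes[k], ..., value) would replace the two assignments to value; the allocated ints and strdup'd strings then need matching free() calls when the table is torn down.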

    Read the article

  • Jquery mobile ajax request not working after 4-5 requests are made in Android

    - by Coder_sLaY
    I am developing an application using jQuery Mobile 1.1.0 RC1 and PhoneGap 1.5.0. I have a single HTML page which contains all the pages in it as divs (through data-role="page"). Here is my code:

        <!DOCTYPE HTML>
        <html>
        <head>
        <title>Index Page</title>
        <!-- Adding viewport -->
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <!-- Adding Phonegap scripts -->
        <script type="text/javascript" charset="utf-8" src="cordova/cordova-1.5.0.js"></script>
        <!-- Adding jQuery mobile and jQuery scripts & CSS -->
        <script type="text/javascript" src="jquery/jquery-1.7.1.min.js"></script>
        <link rel="stylesheet" href="jquerymobile/jquery.mobile-1.1.0-rc.1.min.css" />
        <script type="text/javascript" src="jquery/jquery.validate.min.js"></script>
        <script type="text/javascript" src="jquerymobile/jquery.mobile-1.1.0-rc.1.min.js"></script>
        <link rel="stylesheet" href="css/colors.css">
        <script type="text/javascript">
        function page1(){
            $.mobile.changePage("#page2", { transition : "slide" });
        }
        function page2(){
            $.mobile.changePage("#page1", { transition : "slide" });
        }
        $("#page1").live("pageshow", function(e) {
            $.ajax({
                type : 'GET',
                cache : false,
                url : "http://192.168.1.198:9051/something.xml" + "?time=" + Date.now(),
                data : { key : "value" },
                dataType : "xml",
                success : function(xml) {
                    console.log("Success Page1");
                },
                error : function(xhr) {
                }
            });
        });
        $("#page2").live("pageshow", function(e) {
            $.ajax({
                type : 'GET',
                cache : false,
                url : "http://192.168.1.198:9051/something.xml" + "?time=" + Date.now(),
                data : { key : "value" },
                dataType : "xml",
                success : function(xml) {
                    console.log("Success Page2");
                },
                error : function(xhr) {
                }
            });
        });
        </script>
        </head>
        <body>
        <div data-role="page" id="page1">
            <div data-role="header">Page 1</div>
            <div data-role="content">
                <input type="text" name="page1GetTime" id="page1GetTime" value="" />
                <a href="#" data-role="button" onclick="page1()" id="gotopage2">Go to Page 2</a>
            </div>
        </div>
        <div data-role="page" id="page2">
            <div data-role="header">Page 2</div>
            <div data-role="content">
                <input type="text" name="page2GetTime" id="page2GetTime" value="" />
                <a href="#" data-role="button" onclick="page2()" id="gotopage1">Go to Page 1</a>
            </div>
        </div>
        </body>
        </html>

    Now when I click "Go to Page 2", page2 is shown along with one ajax request. If I keep moving from one page to another, an ajax request is made on each transition, but the requests stop responding after 4 to 5 of them... Why is this happening?
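    A hedged diagnostic step, not a confirmed fix: give each request an explicit timeout and log the error path, so that an exhausted or stalled connection surfaces as a visible timeout instead of silence. The 10-second value is an arbitrary choice for illustration:

        $.ajax({
            type : 'GET',
            cache : false,
            timeout : 10000, // ms; arbitrary, so a hung request fails visibly
            url : "http://192.168.1.198:9051/something.xml" + "?time=" + Date.now(),
            data : { key : "value" },
            dataType : "xml",
            success : function(xml) {
                console.log("Success");
            },
            error : function(xhr, status, err) {
                console.log("Failed: " + status); // "timeout", "error", "abort", ...
            }
        });

    If every stalled request reports "timeout", the device's connection pool (or keep-alive handling on the server) is the place to look next.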

    Read the article

  • Error in IIS7 but not on IIS6

    - by Brad
    I have a website that we are now deploying to Windows 2008 servers; it has worked in the past on IIS6 without a problem. It is using the .NET 2.0 framework. Most of the website works; it is only when we create a screen report over a certain size on the server that we get this error.

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 6/2/2010 10:40:17 AM
        Event time (UTC): 6/2/2010 3:40:17 PM
        Event ID: 1b719ad45d444f949ecc9cbc23f49720
        Event sequence: 10
        Event occurrence: 1
        Event detail code: 0
        Application information:
            Application domain: /LM/W3SVC/3/ROOT-1-129199668164927170
            Trust level: Full
            Application Virtual Path: /
            Application Path: c:\web\PatronAccess\
            Machine name: WIN2008DEV
        Process information:
            Process ID: 4712
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE
        Exception information:
            Exception type: HttpException
            Exception message: Invalid viewstate.
        Request information:
            Request URL: http://win2008dev/WebResource.axd?d=xCXKkHAeSYHWbCg.gif
            Request path: /WebResource.axd
            User host address: 172.17.2.66
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: NT AUTHORITY\NETWORK SERVICE
        Thread information:
            Thread ID: 6
            Thread account name: NT AUTHORITY\NETWORK SERVICE
            Is impersonating: False
            Stack trace:
                at System.Web.UI.Page.DecryptStringWithIV(String s, IVType ivType)
                at System.Web.Handlers.AssemblyResourceLoader.System.Web.IHttpHandler.ProcessRequest(HttpContext context)
                at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
                at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
        Custom event details:

    And this one:

        A process serving application pool 'PatronAccess' suffered a fatal communication error with the Windows Process Activation Service. The process id was '4596'. The data field contains the error number.

    I have a debug of the application pool, but I don't know where to go from here.
* wait with pending attach Symbol search path is: Executable search path is: ModLoad: 00bd0000 00bd8000 c:\windows\system32\inetsrv\w3wp.exe ModLoad: 77380000 774a7000 C:\Windows\system32\ntdll.dll ModLoad: 75cb0000 75d8b000 C:\Windows\system32\kernel32.dll ModLoad: 75b60000 75c26000 C:\Windows\system32\ADVAPI32.dll ModLoad: 75df0000 75eb2000 C:\Windows\system32\RPCRT4.dll ModLoad: 76500000 765aa000 C:\Windows\system32\msvcrt.dll ModLoad: 76250000 762ed000 C:\Windows\system32\USER32.dll ModLoad: 75ae0000 75b2b000 C:\Windows\system32\GDI32.dll ModLoad: 75ec0000 76004000 C:\Windows\system32\ole32.dll ModLoad: 731a0000 731d6000 c:\windows\system32\inetsrv\IISUTIL.dll ModLoad: 75330000 75421000 C:\Windows\system32\CRYPT32.dll ModLoad: 75490000 754a2000 C:\Windows\system32\MSASN1.dll ModLoad: 758e0000 758fe000 C:\Windows\system32\USERENV.dll ModLoad: 758c0000 758d4000 C:\Windows\system32\Secur32.dll ModLoad: 75b30000 75b5d000 C:\Windows\system32\WS2_32.dll ModLoad: 774e0000 774e6000 C:\Windows\system32\NSI.dll ModLoad: 75ac0000 75ade000 C:\Windows\system32\IMM32.DLL ModLoad: 772b0000 77378000 C:\Windows\system32\MSCTF.dll ModLoad: 774f0000 774f9000 C:\Windows\system32\LPK.DLL ModLoad: 75c30000 75cad000 C:\Windows\system32\USP10.dll ModLoad: 74d30000 74d51000 C:\Windows\system32\NTMARTA.DLL ModLoad: 77500000 7754a000 C:\Windows\system32\WLDAP32.dll ModLoad: 75990000 75997000 C:\Windows\system32\PSAPI.DLL ModLoad: 754b0000 754c1000 C:\Windows\system32\SAMLIB.dll ModLoad: 744c0000 744ce000 c:\windows\system32\inetsrv\w3wphost.dll ModLoad: 77550000 775dd000 C:\Windows\system32\OLEAUT32.dll ModLoad: 72ec0000 72f12000 c:\windows\system32\inetsrv\nativerd.dll ModLoad: 742a0000 742cf000 C:\Windows\system32\XmlLite.dll ModLoad: 72e60000 72e90000 c:\windows\system32\inetsrv\IISRES.DLL ModLoad: 74f40000 74f7b000 C:\Windows\system32\rsaenh.dll ModLoad: 72f40000 72f86000 C:\Windows\system32\mscoree.dll ModLoad: 75d90000 75de8000 C:\Windows\system32\SHLWAPI.dll ModLoad: 74600000 7479e000 C:\Windows\WinSxS\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.6001.18000_none_5cdbaa5a083979cc\comctl32.dll ModLoad: 72310000 728a0000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorwks.dll ModLoad: 72dc0000 72e5b000 C:\Windows\WinSxS\x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.3053_none_d08d7bba442a9b36\MSVCR80.dll ModLoad: 75a30000 75ab4000 C:\Windows\system32\CLBCatQ.DLL ModLoad: 728a0000 728d0000 C:\Windows\system32\mlang.dll ModLoad: 6c7d0000 6c801000 C:\Windows\system32\inetsrv\iiscore.dll ModLoad: 71fd0000 71fd7000 c:\windows\system32\inetsrv\W3TP.dll ModLoad: 74480000 74489000 c:\windows\system32\inetsrv\w3dt.dll ModLoad: 71fb0000 71fbb000 C:\Windows\system32\HTTPAPI.dll ModLoad: 752f0000 7532a000 C:\Windows\system32\slc.dll ModLoad: 6cad0000 6caf8000 C:\Windows\system32\faultrep.dll ModLoad: 75050000 75058000 C:\Windows\system32\VERSION.dll ModLoad: 74b80000 74b8f000 C:\Windows\system32\NLAapi.dll ModLoad: 75290000 752a9000 C:\Windows\system32\IPHLPAPI.DLL ModLoad: 75250000 75285000 C:\Windows\system32\dhcpcsvc.DLL ModLoad: 754d0000 754fc000 C:\Windows\system32\DNSAPI.dll ModLoad: 75240000 75247000 C:\Windows\system32\WINNSI.DLL ModLoad: 75210000 75231000 C:\Windows\system32\dhcpcsvc6.DLL ModLoad: 750b0000 750eb000 C:\Windows\System32\mswsock.dll ModLoad: 73920000 73928000 C:\Windows\System32\winrnr.dll ModLoad: 73720000 7372f000 C:\Windows\system32\napinsp.dll ModLoad: 74d00000 74d05000 C:\Windows\System32\wshtcpip.dll ModLoad: 75140000 75145000 C:\Windows\System32\wship6.dll ModLoad: 
73910000 73916000 C:\Windows\system32\rasadhlp.dll ModLoad: 6ca00000 6ca06000 C:\Windows\System32\inetsrv\cachuri.dll ModLoad: 6c9f0000 6c9f8000 C:\Windows\System32\inetsrv\cachfile.dll ModLoad: 6c9e0000 6c9e6000 C:\Windows\System32\inetsrv\cachtokn.dll ModLoad: 6c9d0000 6c9de000 C:\Windows\System32\inetsrv\cachhttp.dll ModLoad: 6c960000 6c96e000 C:\Windows\System32\inetsrv\compstat.dll ModLoad: 6c930000 6c938000 C:\Windows\System32\inetsrv\defdoc.dll ModLoad: 6c910000 6c919000 C:\Windows\System32\inetsrv\dirlist.dll ModLoad: 6c6b0000 6c6b8000 C:\Windows\System32\inetsrv\protsup.dll ModLoad: 6c6a0000 6c6ad000 C:\Windows\System32\inetsrv\static.dll ModLoad: 6c690000 6c69b000 C:\Windows\System32\inetsrv\authanon.dll ModLoad: 6c680000 6c68b000 C:\Windows\System32\inetsrv\authbas.dll ModLoad: 6c630000 6c63e000 C:\Windows\System32\inetsrv\authsspi.dll ModLoad: 755b0000 75625000 C:\Windows\system32\NETAPI32.dll ModLoad: 6c620000 6c62b000 C:\Windows\System32\inetsrv\modrqflt.dll ModLoad: 6c610000 6c61d000 C:\Windows\System32\inetsrv\custerr.dll ModLoad: 6c5c0000 6c5c8000 C:\Windows\System32\inetsrv\loghttp.dll ModLoad: 6c330000 6c337000 C:\Windows\System32\inetsrv\iisreqs.dll ModLoad: 728f0000 728f7000 C:\Windows\system32\WSOCK32.dll ModLoad: 6c1f0000 6c20e000 C:\Windows\System32\inetsrv\isapi.dll ModLoad: 6c000000 6c011000 C:\Windows\System32\inetsrv\filter.dll ModLoad: 6c320000 6c328000 C:\Windows\System32\inetsrv\validcfg.dll ModLoad: 6a2a0000 6a30d000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\webengine.dll ModLoad: 60060000 60067000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_filter.dll ModLoad: 6c310000 6c319000 C:\Windows\system32\inetsrv\wbhst_pm.dll ModLoad: 765b0000 770c0000 C:\Windows\system32\shell32.dll ModLoad: 70d10000 71807000 C:\Windows\assembly\NativeImages_v2.0.50727_32\mscorlib\17f572b09facdc5fda9431558eb7a26e\mscorlib.ni.dll ModLoad: 70580000 70d05000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System\52e1ea3c7491e05cda766d7b3ce3d559\System.ni.dll ModLoad: 03990000 044d3000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Web\96071d36e4d44ebb31a3b46f08fdc732\System.Web.ni.dll ModLoad: 75770000 757cf000 C:\Windows\system32\sxs.dll ModLoad: 72ac0000 72bb1000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Configuration\e6001d416f7c468334934a2c6a41c631\System.Configuration.ni.dll ModLoad: 71890000 71dc6000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Xml\7208ffa39630e9b923331f9df0947a12\System.Xml.ni.dll ModLoad: 66580000 667bc000 C:\Windows\assembly\NativeImages_v2.0.50727_32\Microsoft.JScript\1543943b86269c9bebd5cf7a3fe7f55b\Microsoft.JScript.ni.dll ModLoad: 74460000 74468000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\App_global.asax.cyzjkxpg.dll ModLoad: 65d20000 65e7c000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\10097bf6\5f9a08ec_fffcca01\PatronAccess.DLL ModLoad: 72030000 7208b000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorjit.dll ModLoad: 68ab0000 68bca000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Web.Extensio#\3b4cb090536bf6b0dfae8cefaeeadb9f\System.Web.Extensions.ni.dll ModLoad: 64020000 64033000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorsec.dll ModLoad: 73c40000 73c6d000 C:\Windows\system32\WINTRUST.dll ModLoad: 774b0000 774d9000 C:\Windows\system32\imagehlp.dll ModLoad: 73690000 73715000 
C:\Windows\WinSxS\x86_microsoft.windows.common-controls_6595b64144ccf1df_5.82.6001.18000_none_886786f450a74a05\COMCTL32.dll ModLoad: 75170000 751a5000 C:\Windows\system32\ncrypt.dll ModLoad: 751b0000 751f5000 C:\Windows\system32\BCRYPT.dll ModLoad: 74d90000 74da5000 C:\Windows\system32\GPAPI.dll ModLoad: 73520000 7353b000 C:\Windows\system32\cryptnet.dll ModLoad: 73440000 73446000 C:\Windows\system32\SensApi.dll ModLoad: 73a50000 73a65000 C:\Windows\system32\Cabinet.dll ModLoad: 6ae30000 6ae3a000 C:\Windows\system32\inetsrv\gzip.dll ModLoad: 69e50000 69e6a000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\App_Web_kal6czmb.dll ModLoad: 69e10000 69e3c000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\App_Web_b1efcjqz.dll ModLoad: 69bd0000 69c26000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\e8a04837\0093847c_5153ca01\Infragistics2.WebUI.UltraWebTab.v9.2.DLL ModLoad: 5e480000 5e95e000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\719ff0ee\00c37169_5153ca01\Infragistics2.Web.v9.2.DLL ModLoad: 67c90000 67d1a000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\ba3b912a\00d19870_5153ca01\Infragistics2.WebUI.Shared.v9.2.DLL ModLoad: 656a0000 6587a000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\6470a692\14d22a05_ef2ac901\AjaxControlToolkit.DLL ModLoad: 66960000 66ae8000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Drawing\6312464f64727a2a50d5ce3fd73ad1bb\System.Drawing.ni.dll ModLoad: 6e690000 6ece3000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Data\813556b5a2722045b0ea14467fd00227\System.Data.ni.dll ModLoad: 64e70000 65144000 C:\Windows\assembly\GAC_32\System.Data\2.0.0.0__b77a5c561934e089\System.Data.dll ModLoad: 69c70000 69ca2000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\App_Web_zwtn5a73.dll ModLoad: 69e70000 69e8e000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\App_Web_qijxg7dv.dll ModLoad: 645a0000 647bf000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Web.Mobile\b472cb382c17ffc3cb1a91ce12d90bf1\System.Web.Mobile.ni.dll ModLoad: 69c30000 69c66000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Web.RegularE#\e6b57c0506ec849c6706cb5617ad7372\System.Web.RegularExpressions.ni.dll ModLoad: 6c300000 6c30a000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\App_Web__hyepzhd.dll ModLoad: 69e00000 69e08000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\5ef208f7\b68a494a_e840c901\SessionTimeoutControl.DLL ModLoad: 69d50000 69d5c000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\619d48f7\0f695f01_fdfcca01\AgNetDataPro.DLL ModLoad: 69cd0000 69ce8000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\dc1703ed\00e1c635_caeaca01\xfnlnet.DLL ModLoad: 73d50000 73efb000 C:\Windows\WinSxS\x86_microsoft.windows.gdiplus_6595b64144ccf1df_1.0.6001.18175_none_9e7bbe54c9c04bca\gdiplus.dll (16cc.14e0): Break instruction exception - code 80000003 (first chance) eax=7ffa6000 ebx=00000000 ecx=00000000 edx=7740d094 esi=00000000 edi=00000000 eip=773c7dfe 
esp=051ff774 ebp=051ff7a0 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246 ntdll!DbgBreakPoint: 773c7dfe cc int 3 0:021 g (16cc.1454): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. eax=00000000 ebx=00000479 ecx=00000000 edx=019d21f8 esi=019d1f18 edi=019ba74c eip=013849ed esp=0499ea44 ebp=0499f15c iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246 013849ed 8b01 mov eax,dword ptr [ecx] ds:0023:00000000=???????? 0:018 g ModLoad: 65890000 65a55000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Web.Services\2fa835ce2dcace4fc7c0009f102efc79\System.Web.Services.ni.dll ModLoad: 6f2b0000 6f34d000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.EnterpriseSe#\ae383808b3f5ee9287358378f9a2cad3\System.EnterpriseServices.ni.dll ModLoad: 10000000 10020000 System.EnterpriseServices.Wrapper.dll ModLoad: 00e50000 00e70000 System.EnterpriseServices.Wrapper.dll ModLoad: 66da0000 66de8000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.EnterpriseSe#\ae383808b3f5ee9287358378f9a2cad3\System.EnterpriseServices.Wrapper.dll ModLoad: 10000000 10020000 C:\Windows\assembly\GAC_32\System.EnterpriseServices\2.0.0.0__b03f5f7f11d50a3a\System.EnterpriseServices.Wrapper.dll ModLoad: 6ab40000 6ab4c000 image6ab40000 ModLoad: 04950000 0495c000 image04950000 ModLoad: 049a0000 049c0000 image049a0000 ModLoad: 049d0000 049f0000 image049d0000 ModLoad: 049a0000 049c0000 image049a0000 ModLoad: 04a40000 04a60000 image04a40000 ModLoad: 049a0000 049c0000 image049a0000 ModLoad: 04a40000 04a60000 image04a40000 ModLoad: 049a0000 049c0000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\da3b70a0\00e9280f_c1f4c201\ICSharpCode.SharpZipLib.DLL ModLoad: 5eb40000 5f01e000 Infragistics2.Web.v9.2.dll ModLoad: 05a00000 05ede000 Infragistics2.Web.v9.2.dll ModLoad: 694d0000 694fa000 image694d0000 ModLoad: 049d0000 049fa000 image049d0000 ModLoad: 68cc0000 68cea000 image68cc0000 ModLoad: 04e40000 04e6a000 image04e40000 ModLoad: 69470000 6949a000 image69470000 ModLoad: 04e40000 04e6a000 image04e40000 ModLoad: 69470000 6949a000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\f77351ae\00582c74_5153ca01\Infragistics2.WebUI.Misc.v9.2.DLL ModLoad: 67d20000 67daa000 image67d20000 ModLoad: 04e70000 04efa000 image04e70000 ModLoad: 643e0000 64598000 Infragistics2.WebUI.UltraWebChart.v9.2.dll ModLoad: 05a00000 05bb8000 Infragistics2.WebUI.UltraWebChart.v9.2.dll ModLoad: 63ac0000 63c78000 Infragistics2.WebUI.UltraWebChart.v9.2.dll ModLoad: 05bc0000 05d78000 Infragistics2.WebUI.UltraWebChart.v9.2.dll ModLoad: 63900000 63ab8000 Infragistics2.WebUI.UltraWebChart.v9.2.dll ModLoad: 05bc0000 05d78000 Infragistics2.WebUI.UltraWebChart.v9.2.dll ModLoad: 63900000 63ab8000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\9acf477c\0030eeb6_5153ca01\Infragistics2.WebUI.UltraWebChart.v9.2.DLL ModLoad: 60570000 607b6000 image60570000 ModLoad: 05d80000 05fc6000 image05d80000 ModLoad: 64350000 64596000 image64350000 ModLoad: 05fd0000 06216000 image05fd0000 ModLoad: 5edd0000 5f016000 image5edd0000 ModLoad: 05fd0000 06216000 image05fd0000 ModLoad: 5edd0000 5f016000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET 
Files\root\048afd31\e1f306b4\assembly\dl3\30e4a2ff\00dfbf77_5153ca01\Infragistics2.WebUI.UltraWebGrid.v9.2.DLL ModLoad: 67d50000 67da6000 image67d50000 ModLoad: 04e70000 04ec6000 image04e70000 ModLoad: 68cb0000 68ce4000 image68cb0000 ModLoad: 04e70000 04ea4000 image04e70000 ModLoad: 68790000 687c4000 image68790000 ModLoad: 04eb0000 04ee4000 image04eb0000 ModLoad: 688f0000 68924000 image688f0000 ModLoad: 04eb0000 04ee4000 image04eb0000 ModLoad: 688f0000 68924000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\2420cb22\00a1ab83_5153ca01\Infragistics2.WebUI.WebCombo.v9.2.DLL ModLoad: 66d50000 66da0000 image66d50000 ModLoad: 04f80000 04fd0000 image04f80000 ModLoad: 67d60000 67db0000 image67d60000 ModLoad: 05a00000 05a50000 image05a00000 ModLoad: 66d00000 66d50000 image66d00000 ModLoad: 05a00000 05a50000 image05a00000 ModLoad: 66d00000 66d50000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\6ceab935\00b28e76_5153ca01\Infragistics2.WebUI.WebDataInput.v9.2.DLL ModLoad: 11000000 1112e000 image11000000 ModLoad: 05a50000 05b7e000 image05a50000 ModLoad: 11000000 1112e000 image11000000 ModLoad: 05d80000 05eae000 image05d80000 ModLoad: 11000000 1112e000 image11000000 ModLoad: 05d80000 05eae000 image05d80000 ModLoad: 11000000 1112e000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\e99fdd05\00c79c09_d868c301\itextsharp.DLL ModLoad: 04df0000 04dfe000 LinkPointAPI-cs.dll ModLoad: 04e70000 04e7e000 LinkPointAPI-cs.dll ModLoad: 04df0000 04dfe000 LinkPointAPI-cs.dll ModLoad: 04e80000 04e8e000 LinkPointAPI-cs.dll ModLoad: 04df0000 04dfe000 LinkPointAPI-cs.dll ModLoad: 04e80000 04e8e000 LinkPointAPI-cs.dll ModLoad: 04df0000 04dfe000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\0e724536\00922343_54dfc701\LinkPointAPI-cs.DLL ModLoad: 04e70000 04e78000 image04e70000 ModLoad: 04e90000 04e98000 image04e90000 ModLoad: 04e70000 04e78000 image04e70000 ModLoad: 04ea0000 04ea8000 image04ea0000 ModLoad: 04e70000 04e78000 image04e70000 ModLoad: 04ea0000 04ea8000 image04ea0000 ModLoad: 04e70000 04e78000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\859797c4\00eb5fc5_bed8c401\LinkPointTransaction.DLL ModLoad: 65e80000 65fdc000 PatronAccess.dll ModLoad: 05a50000 05bac000 PatronAccess.dll ModLoad: 6ab40000 6ab48000 SessionTimeoutControl.dll ModLoad: 04e90000 04e98000 SessionTimeoutControl.dll ModLoad: 6ab80000 6ab8e000 WebServices.dll ModLoad: 04e90000 04e9e000 WebServices.dll ModLoad: 6ab40000 6ab4e000 WebServices.dll ModLoad: 04ef0000 04efe000 WebServices.dll ModLoad: 69d40000 69d4e000 WebServices.dll ModLoad: 04ef0000 04efe000 WebServices.dll ModLoad: 69d40000 69d4e000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\048afd31\e1f306b4\assembly\dl3\21555aa5\5f498093_fefcca01\WebServices.DLL ModLoad: 694e0000 694f8000 image694e0000 ModLoad: 04f80000 04f98000 image04f80000 ModLoad: 661c0000 6624e000 System.ServiceModel.Web.dll ModLoad: 05a50000 05ade000 System.ServiceModel.Web.dll ModLoad: 5d850000 5ddfc000 System.ServiceModel.dll ModLoad: 06220000 067cc000 System.ServiceModel.dll ModLoad: 65ef0000 65fe0000 System.Runtime.Serialization.dll ModLoad: 05eb0000 05fa0000 System.Runtime.Serialization.dll ModLoad: 694e0000 694fe000 SMDiagnostics.dll ModLoad: 04f80000 04f9e000 SMDiagnostics.dll ModLoad: 65be0000 
65d1c000 System.Web.Extensions.dll ModLoad: 067d0000 0690c000 System.Web.Extensions.dll ModLoad: 67d40000 67dac000 System.IdentityModel.dll ModLoad: 05ae0000 05b4c000 System.IdentityModel.dll ModLoad: 687a0000 687c2000 System.IdentityModel.Selectors.dll ModLoad: 04fa0000 04fc2000 System.IdentityModel.Selectors.dll ModLoad: 66c90000 66cf4000 Microsoft.Transactions.Bridge.dll ModLoad: 05b50000 05bb4000 Microsoft.Transactions.Bridge.dll ModLoad: 69130000 69146000 System.Web.Abstractions.dll ModLoad: 051b0000 051c6000 System.Web.Abstractions.dll ModLoad: 65150000 651f6000 System.Core.dll ModLoad: 06910000 069b6000 System.Core.dll ModLoad: 64440000 644ea000 System.Data.Linq.dll ModLoad: 069c0000 06a6a000 System.Data.Linq.dll ModLoad: 66d50000 66d9c000 System.Data.Services.Client.dll ModLoad: 06a70000 06abc000 System.Data.Services.Client.dll ModLoad: 68cd0000 68cf0000 System.Data.Services.Design.dll ModLoad: 05210000 05230000 System.Data.Services.Design.dll ModLoad: 5eb00000 5edc2000 System.Data.Entity.dll ModLoad: 06ac0000 06d82000 System.Data.Entity.dll ModLoad: 66af0000 66b16000 System.Xml.Linq.dll ModLoad: 05fa0000 05fc6000 System.Xml.Linq.dll ModLoad: 661c0000 6624e000 C:\Windows\assembly\GAC_MSIL\System.ServiceModel.Web\3.5.0.0__31bf3856ad364e35\System.ServiceModel.Web.dll ModLoad: 64520000 6459e000 System.WorkflowServices.dll ModLoad: 06d90000 06e0e000 System.WorkflowServices.dll ModLoad: 63af0000 63c80000 System.Workflow.ComponentModel.dll ModLoad: 06e10000 06fa0000 System.Workflow.ComponentModel.dll ModLoad: 64320000 6443a000 System.Workflow.Activities.dll ModLoad: 06fa0000 070ba000 System.Workflow.Activities.dll ModLoad: 62cf0000 62d78000 System.Workflow.Runtime.dll ModLoad: 070c0000 07148000 System.Workflow.Runtime.dll ModLoad: 68cb0000 68cc6000 Microsoft.Build.Utilities.dll ModLoad: 07150000 07166000 Microsoft.Build.Utilities.dll ModLoad: 6ab80000 6ab8c000 Microsoft.Build.Framework.dll ModLoad: 05230000 0523c000 Microsoft.Build.Framework.dll ModLoad: 07170000 07214000 Microsoft.Build.Tasks.dll ModLoad: 07220000 072c4000 Microsoft.Build.Tasks.dll ModLoad: 64520000 6459e000 C:\Windows\assembly\GAC_MSIL\System.WorkflowServices\3.5.0.0__31bf3856ad364e35\System.WorkflowServices.dll ModLoad: 5d610000 5d84e000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Runtime.Seri#\a33b3b88fd575b703ba4212c677880ae\System.Runtime.Serialization.ni.dll ModLoad: 605a0000 606a6000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.IdentityModel\3bfbe737873becead614d1504e7d5684\System.IdentityModel.ni.dll ModLoad: 5ab70000 5bbf7000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.ServiceModel\7115815b53ec561932345e16fbeea968\System.ServiceModel.ni.dll ModLoad: 61440000 6201e000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Windows.Forms\1941d7639299344ae28fb6b23da65247\System.Windows.Forms.ni.dll ModLoad: 5d190000 5d3c4000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Core\a0522cb280c09b3441e1889502ca145a\System.Core.ni.dll ModLoad: 60a00000 61433000 C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Design\d3fa02f8a34329c8b84c004afaea7054\System.Design.ni.dll (16cc.1454): CLR exception - code e0434f4d (first chance) (16cc.1454): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. 
eax=00000000 ebx=01776038 ecx=00000000 edx=00000000 esi=017ff314 edi=018907f8 eip=071a62fc esp=0499ee88 ebp=0499eef4 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246
071a62fc 8b01 mov eax,dword ptr [ecx] ds:0023:00000000=????????
0:018 g
(16cc.1454): CLR exception - code e0434f4d (first chance)
(16cc.1454): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling. This exception may be expected and handled.
eax=00000000 ebx=01776038 ecx=00000000 edx=00000000 esi=017ff200 edi=0186ed04 eip=071a62fc esp=0499ee88 ebp=0499eef4 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246
071a62fc 8b01 mov eax,dword ptr [ecx] ds:0023:00000000=????????
0:018 g
(16cc.1358): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling. This exception may be expected and handled.
eax=00000000 ebx=01776038 ecx=00000000 edx=00000000 esi=017ff200 edi=01858380 eip=071a62fc esp=0742ee98 ebp=0742ef04 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246
071a62fc 8b01 mov eax,dword ptr [ecx] ds:0023:00000000=????????
0:020 g
(16cc.1358): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling. This exception may be expected and handled.
eax=00000000 ebx=017758a4 ecx=00000000 edx=00000000 esi=017fd078 edi=018b6afc eip=071a62fc esp=0742ee98 ebp=0742ef04 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246
071a62fc 8b01 mov eax,dword ptr [ecx] ds:0023:00000000=????????
0:020 g
(16cc.1358): Stack overflow - code c00000fd (first chance)
First chance exceptions are reported before any exception handling. This exception may be expected and handled.
eax=00000000 ebx=020504b4 ecx=000001d1 edx=0000001b esi=020503d4 edi=073f2998 eip=6eaf0ed3 esp=073f2980 ebp=073f30ec iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246
* WARNING: Unable to verify checksum for C:\Windows\assembly\NativeImages_v2.0.50727_32\System.Data\813556b5a2722045b0ea14467fd00227\System.Data.ni.dll
System_Data_ni!_bidW103 (System_Data_ni+0x460ed3): 6eaf0ed3 f3ab rep stos dword ptr es:[edi]

    Any help would be appreciated.
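    A hedged reading of the evidence, for whatever it is worth. First, "Invalid viewstate" thrown from System.Web.UI.Page.DecryptStringWithIV on a WebResource.axd request frequently traces back to auto-generated machine keys that differ between worker processes or change across app-pool recycles, which IIS7's default recycling behavior tends to expose more than IIS6 did. One common mitigation is to pin an explicit <machineKey> in web.config; the key values below are placeholders, not usable keys:

        <system.web>
          <!-- Placeholders only: generate your own keys; never reuse samples. -->
          <machineKey
              validationKey="YOUR-GENERATED-VALIDATION-KEY-HEX"
              decryptionKey="YOUR-GENERATED-DECRYPTION-KEY-HEX"
              validation="SHA1"
              decryption="AES" />
        </system.web>

    Second, the debugger session ends in a stack overflow (code c00000fd) inside System.Data.ni.dll, and a stack overflow kills the worker process outright, which would account for the WAS "fatal communication error". Since the failure only appears for reports over a certain size, runaway recursion or a very deep call chain in the data layer that builds large reports is another plausible lead.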

    Read the article

< Previous Page | 675 676 677 678 679 680 681 682 683 684 685 686  | Next Page >