Search Results

Search found 46833 results on 1874 pages for 'distributed system'.


  • UppercuT – Custom Extensions Now With PowerShell and Ruby

    - by Robz / Fervent Coder
    Arguably, one of the most powerful features of UppercuT (UC) is the ability to extend any step of the build process with a pre, post, or replace hook. This customization is done in a separate location from the build, so you can upgrade without wondering whether you broke the build. There is a hook before each step of the build runs. There is a hook after. And, back to power again, there is a replacement hook: if you don't like what a step is doing and/or you want to replace its entire functionality, you just drop in a custom replacement extension and UppercuT will perform the custom step instead. Up until recently, all custom hooks had to be written in NAnt. Now they are a little sweeter, because you no longer need to use NAnt to extend UC if you don't want to. You can use PowerShell. Or Ruby. Let that sink in for a moment. You don't even need to interact with NAnt at all now.

    Extension Points

    On the wiki, all of the extension points are shown. The basic idea is that you put whatever customization you are doing in a separate folder named build.custom. The start point is default.build. It calls build.custom/default.pre.build if it exists, then it runs build/default.build (normal tasks) OR build.custom/default.replace.build if it exists, and finally build.custom/default.post.build if it exists. Every step below runs with the same extension points; only the file name it looks for changes. NOTE: If you include default.replace.build, nothing else will run, because everything is called from default.build.

    Let's take a look at all we can customize:

      * policyChecks.step
      * versionBuilder.step
        NOTE: If you include build.custom/versionBuilder.replace.step, the items below will not run.
          - svn.step, tfs.step, or git.step (the custom tasks for these need to go in build.custom/versioners)
      * generateBuildInfo.step
      * compile.step
      * environmentBuilder.step
      * analyze.step
        NOTE: If you include build.custom/analyze.replace.step, the items below will not run.
          - test.step (the custom tasks for this need to go in build.custom/analyzers)
            NOTE: If you include build.custom/analyzers/test.replace.step, the items below will not run.
              + mbunit2.step, gallio.step, or nunit.step (the custom tasks for these need to go in build.custom/analyzers)
          - ncover.step (the custom tasks for this need to go in build.custom/analyzers)
          - ndepend.step (the custom tasks for this need to go in build.custom/analyzers)
          - moma.step (the custom tasks for this need to go in build.custom/analyzers)
      * package.step
        NOTE: If you include build.custom/package.replace.step, the items below will not run.
          - deploymentBuilder.step

    Customize UppercuT Builds With PowerShell

    UppercuT can now be extended with PowerShell (PS). To customize any extension point with PS, just add .ps1 to the end of the file name and write your custom tasks in PowerShell. If you are not signing your scripts, you will need to change a setting in the UppercuT.config file. This does impose a security risk, because it allows PS to run any PS script. The setting stays that way on ANY machine that runs the build until manually changed by someone. I'm not responsible if you mess up your machine or anyone else's by doing this. You've been warned. Now that you are fully aware of any security holes you may open and are okay with that, let's move on. Let's create a file called default.replace.build.ps1 in the build.custom folder.
    Open that file in Notepad and add this to it:

        write-host "hello - I'm a custom task written in Powershell!"

    Now, let's run build.bat. You could get some PSake action going here; I won't dive into that in this post though.

    Customize UppercuT Builds With Ruby

    If you want to customize any extension point with Ruby, just add .rb to the end of the file name and write your custom tasks in Ruby. Let's write a custom Ruby task for UC. If you were thinking it would be the same as the one we just wrote for PS, you'd be right! In the build.custom folder, let's create a file called default.replace.build.rb. Open that file in Notepad and put this in there:

        puts "I'm a custom ruby task!"

    Now, let's run build.bat again. That's chunky bacon.

    UppercuT and Albacore.NET

    Just for fun, I wanted to see if I could replace the compile.step with a Rake task. Not just any Rake task: Albacore's msbuild task. Albacore is a suite of Rake tasks brought about by Derick Bailey to make building .NET with Rake easier. It has quite a bit of support from developers that are using Rake to build code. In my build.custom folder, I drop a compile.replace.step.rb. I also put in a separate file that will contain my Albacore rake task, and I call that compile.rb.

    What are the contents of compile.replace.step.rb?

        rake = 'rake'
        arguments = '-f ' + Dir.pwd + '/../build.custom/compile.rb'
        #puts "Calling #{rake} " + arguments
        system("#{rake} " + arguments)

    Since the custom extensions call Ruby, we have to shell back out and call Rake. That's what we are doing here. We also realize that Ruby is called from the build folder, so we need to back out and dive into the build.custom folder to find the file that is technically next to us.

    What are the contents of compile.rb?

        require 'rubygems'
        require 'fileutils'
        require 'albacore'

        task :default => [:compile]

        puts "Using Ruby to compile UppercuT with Albacore Tasks"

        desc 'Compile the source'
        msbuild :compile do |msb|
          msb.properties = { :configuration => :Release, :outputpath => '../../build_output/UppercuT' }
          msb.targets [:clean, :build]
          msb.verbosity = "quiet"
          msb.path_to_command = 'c:/Windows/Microsoft.NET/Framework/v3.5/MSBuild.exe'
          msb.solution = '../uppercut.sln'
        end

    We are using the msbuild task here. We change the output path to the build_output/UppercuT folder. The output path has "../../" because this is based on every project. We could grab the current directory and then point the task specifically to a folder if we have projects that are at different levels. We want the verbosity to be quiet, so we set that as well.

    So what kind of output do you get for this? Let's run build.bat:

        custom_tasks_replace:
             [echo] Running custom tasks instead of normal tasks if C:\code\uppercut\build\..\build.custom\compile.replace.step exists.
             [exec] (in C:/code/uppercut/build)
             [exec] Using Ruby to compile UppercuT with Albacore Tasks
             [exec] Microsoft (R) Build Engine Version 3.5.30729.4926
             [exec] [Microsoft .NET Framework, Version 2.0.50727.4927]
             [exec] Copyright (C) Microsoft Corporation 2007. All rights reserved.

    If you think this is awesome, you'd be right! With this knowledge you shall build.

    Read the article

  • Creating a podcast feed for iTunes & BlackBerry users using WCF Syndication

    - by brian_ritchie
    In my previous post, I showed how to create an RSS feed using WCF Syndication. Next, I'll show how to add the additional tags needed to turn an RSS feed into an iTunes podcast.

    A podcast is merely an RSS feed with some special characteristics:

      * iTunes RSS tags. These are additional tags beyond the standard RSS spec. Apple has a good page on the requirements.
      * Audio file enclosure. This is a link to the audio file (such as mp3) hosted by your site. Apple doesn't host the audio; they just read the meta-data from the RSS feed into their system.

    The SyndicationFeed class supports both AttributeExtensions & ElementExtensions to add custom tags to the RSS feeds. A couple of points of interest in the code below:

      * The imageUrl below provides the album cover for iTunes (170px × 170px)
      * Each SyndicationItem corresponds to an audio episode in your podcast

    So, here's the code:

        XNamespace itunesNS = "http://www.itunes.com/dtds/podcast-1.0.dtd";
        string prefix = "itunes";

        var feed = new SyndicationFeed(title, description, new Uri(link));
        feed.Categories.Add(new SyndicationCategory(category));
        feed.AttributeExtensions.Add(new XmlQualifiedName(prefix,
            "http://www.w3.org/2000/xmlns/"), itunesNS.NamespaceName);
        feed.Copyright = new TextSyndicationContent(copyright);
        feed.Language = "en-us";
        feed.Copyright = new TextSyndicationContent(DateTime.Now.Year + " " + ownerName);
        feed.ImageUrl = new Uri(imageUrl);
        feed.LastUpdatedTime = DateTime.Now;
        feed.Authors.Add(new SyndicationPerson() { Name = ownerName, Email = ownerEmail });
        var extensions = feed.ElementExtensions;
        extensions.Add(new XElement(itunesNS + "subtitle", subTitle).CreateReader());
        extensions.Add(new XElement(itunesNS + "image",
            new XAttribute("href", imageUrl)).CreateReader());
        extensions.Add(new XElement(itunesNS + "author", ownerName).CreateReader());
        extensions.Add(new XElement(itunesNS + "summary", description).CreateReader());
        extensions.Add(new XElement(itunesNS + "category",
            new XAttribute("text", category),
            new XElement(itunesNS + "category",
                new XAttribute("text", subCategory))).CreateReader());
        extensions.Add(new XElement(itunesNS + "explicit", "no").CreateReader());
        extensions.Add(new XDocument(
            new XElement(itunesNS + "owner",
                new XElement(itunesNS + "name", ownerName),
                new XElement(itunesNS + "email", ownerEmail))).CreateReader());

        var feedItems = new List<SyndicationItem>();
        foreach (var i in Items)
        {
            var item = new SyndicationItem(i.title, null, new Uri(link));
            item.Summary = new TextSyndicationContent(i.summary);
            item.Id = i.id;
            if (i.publishedDate != null)
                item.PublishDate = (DateTimeOffset)i.publishedDate;
            item.Links.Add(new SyndicationLink() {
                Title = i.title, Uri = new Uri(link),
                Length = i.size, MediaType = i.mediaType });
            var itemExt = item.ElementExtensions;
            itemExt.Add(new XElement(itunesNS + "subtitle", i.subTitle).CreateReader());
            itemExt.Add(new XElement(itunesNS + "summary", i.summary).CreateReader());
            itemExt.Add(new XElement(itunesNS + "duration",
                string.Format("{0}:{1:00}:{2:00}",
                    i.duration.Hours, i.duration.Minutes, i.duration.Seconds)
                ).CreateReader());
            itemExt.Add(new XElement(itunesNS + "keywords", i.keywords).CreateReader());
            itemExt.Add(new XElement(itunesNS + "explicit", "no").CreateReader());
            itemExt.Add(new XElement("enclosure", new XAttribute("url", i.url),
                new XAttribute("length", i.size), new XAttribute("type", i.mediaType)));
            feedItems.Add(item);
        }

        feed.Items = feedItems;

    If you're hosting your podcast feed within an MVC project, you can use the code from my previous post to stream it.

    Once you have created your feed, you can use the Feed Validator tool to make sure it is up to spec. Or you can use iTunes:

      1. Launch iTunes.
      2. In the Advanced menu, select Subscribe to Podcast.
      3. Enter your feed URL in the text box and click OK.

    After you've verified your feed is solid & good to go, you can submit it to iTunes:

      1. Launch iTunes.
      2. In the left navigation column, click on iTunes Store to open the store.
      3. Once the store loads, click on Podcasts along the top navigation bar to go to the Podcasts page.
      4. In the right column of the Podcasts page, click on the Submit a Podcast link.
      5. Follow the instructions on the Submit a Podcast page.

    Here are the full instructions. Once they have approved your podcast, it will be available within iTunes.

    RIM has also gotten into the podcasting business...which is great for BlackBerry users. They accept the same enhanced-RSS feed that iTunes uses, so just create an account with them & submit the feed's URL. It goes through a similar approval process to iTunes. BlackBerry users must be on BlackBerry 6 OS or download the Podcast App from App World.

    In my next post, I'll show how to build the podcast feed dynamically from the ID3 tags within the MP3 files.
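    As an aside, since the streaming code lives in that earlier post: here is a minimal sketch of one way to serve the feed from MVC. The FeedResult class is hypothetical (any name works); Rss20FeedFormatter is the real WCF type that does the RSS 2.0 serialization, element extensions included.

        using System.ServiceModel.Syndication;
        using System.Web.Mvc;
        using System.Xml;

        public class FeedResult : ActionResult
        {
            private readonly SyndicationFeed _feed;
            public FeedResult(SyndicationFeed feed) { _feed = feed; }

            public override void ExecuteResult(ControllerContext context)
            {
                var response = context.HttpContext.Response;
                response.ContentType = "application/rss+xml";
                // Rss20FeedFormatter writes the feed (including the iTunes
                // element extensions added above) as RSS 2.0 XML.
                using (var writer = XmlWriter.Create(response.Output))
                {
                    new Rss20FeedFormatter(_feed).WriteTo(writer);
                }
            }
        }

        // In a controller action: return new FeedResult(feed);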

    Read the article

  • How to Connect Crystal Reports to MySQL directly by C# code without DSN or a DataSet

    - by Yanko Hernández Alvarez
    How can I connect a Crystal Report (VS 2008 basic) to a MySQL DB without using a DSN or a preloaded DataSet, using C#? I need to install the program in several places, so I must be able to change the connection parameters. I don't want to create a DSN in every place, nor do I want to preload a DataSet and pass it to the report engine. I use NHibernate to access the database, so creating and filling the additional DS would take twice the work and additional maintenance later. I think the best option would be to let the Crystal Reports engine connect to the MySQL server by itself using ODBC.

    I managed to create the connection in the report designer (VS 2008) using the Database Expert, creating an ODBC(RDO) connection and entering the connection string "DRIVER={MySQL ODBC 5.1 Driver};SERVER=myserver.mydomain", and on the "Next" page filling in the "User ID", "Password" and "Database" parameters. I didn't fill in the "Server" parameter. It worked. As a matter of fact, if you use the former connection string, it doesn't matter what you put in the "Server" parameter; it seems the parameter is unused. On the other hand, if you use "DRIVER={MySQL ODBC 5.1 Driver}" as a connection string and later fill the "Server" parameter with the FQDN of the server, the connection doesn't work.

    How can I do that by code? All the examples I've seen till now use a DSN or the DataSet method. I saw the same question posted for PostgreSQL and tried to adapt it to MySQL, but so far, no success. The first method:

        Rp.Load();
        Rp.DataSourceConnections[0].SetConnection(
            "DRIVER={MySQL ODBC 5.1 Driver};SERVER=myserver.mydomain",
            "database", "user", "pass");
        Rp.ExportToDisk(ExportFormatType.PortableDocFormat, "report.pdf");

    raises a CrystalDecisions.CrystalReports.Engine.LogOnException during ExportToDisk:

        Logon failed.
        Details: IM002:[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified.
        Error in File <temporal file path>.rpt:
        Unable to connect: incorrect log on parameters.

    The InnerException is a System.Runtime.InteropServices.COMException with the same message and no InnerException. The "no default driver specified" makes me wonder if the server parameter is unused here too (see above). In that case: how can I specify the connection string? I haven't tried the second method because it doesn't apply. Does anybody know the solution?
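    One alternative to SetConnection that is often suggested for this object model - shown here only as a sketch, not a verified fix for the exception above - is to push a TableLogOnInfo into every table of the report, putting the driver-style connection string in ConnectionInfo.ServerName just as the designer accepted it:

        using CrystalDecisions.CrystalReports.Engine;
        using CrystalDecisions.Shared;

        public static class ReportLogOn
        {
            // Applies ODBC logon info to every table in the loaded report.
            // The connection string in ServerName mirrors the one that worked
            // in the Database Expert; database/user/password are placeholders.
            public static void Apply(ReportDocument report,
                string database, string user, string password)
            {
                var info = new ConnectionInfo
                {
                    ServerName = "DRIVER={MySQL ODBC 5.1 Driver};SERVER=myserver.mydomain",
                    DatabaseName = database,
                    UserID = user,
                    Password = password
                };

                foreach (Table table in report.Database.Tables)
                {
                    TableLogOnInfo logOnInfo = table.LogOnInfo;
                    logOnInfo.ConnectionInfo = info;
                    table.ApplyLogOnInfo(logOnInfo);
                }
            }
        }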

    Read the article

  • CodePlex Daily Summary for Sunday, May 16, 2010

    CodePlex Daily Summary for Sunday, May 16, 2010

    New Projects

      * 3D Calculator: 3D Calc is a simple calculator application for Windows Phone 7, the purpose of this project is to demo the 3D animation capabilities of WP7 and sh...
      * azaleas: Azaleas
      * Blueset Studio Opensource Projects: Only for Opensource projects from Blueset Studio.
      * Breck: A Phoenix and Jumper Moneky Production: Breck is a first person non-violent shooter developed in C++ and Dark GDK. After the main game is developed we are looking into making a sequel or...
      * Discuz! Forum SDK: This project is used to log in and post or reply to topics on a Discuz forum.
      * Dominion.NET: Evolving Dominion source code originally written in VB6 and posted by "jatill" on Collectible Card Game Headquarters. Migration of the design and s...
      * EkspSys2010-ITR: A mini project for the course Experimental System development in spring 2010
      * Facebook Graph Toolkit: This project is a .Net implementation of the Facebook Graph API. The aim of this project is to be a replacement to the existing Facebook Toolkit (h...
      * iFree: This is a solution for a Vietnamese social network
      * InfoPath Editor for Developer: InfoPath Editor for developers allows the user to modify the html text directly inside InfoPath designer or filler and push the change back to InfoPath ...
      * iZeit: Run your own online calendar, with blog integration, recurrence, todo list and categories.
      * machgos dotNet Tests: Just some little test-projects for learning
      * mim: TBA
      * MinePost: MinePost is a game made for the first 48 hour Reddit Game Jam.
      * Mockina: Mockina is a mock framework. Expression tree syntax is used to specify which members to mock, both public and non-public. The code is easy to under...
      * MSBuild Launch Pad (mPad): This is just another shell extension for MSBuild to enable quick execution of MSBuild scripts via the Windows Explorer context menu. (C) 2010 Lex Li
      * Peacock: A browser-like tabbed application
      * PrimeCalculation: PrimeCalculation is a .NET app to calculate primes in a given range. Speed on Core2Duo 2,4GHZ: Found all primes from 0 to 1 billion in 35 seconds (...
      * Slightly Silverlight: A Framework that leverages Silverlight for processing and business logic but standard HTML for the presentation layer.
      * Stopwatch: Stopwatch is a tool for measuring the time. To start and pause the stopwatch you only need to press a key on the keyboard. An additional context menu a...
      * YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib is an XML Serialization library which helps you structure freely the XML result, choose among private and public fields to be serialized, an...

    New Releases

      * Activate Your Glutes: v1.0.3.0: This release is a migration to VS2010, .Net 4, MVC2 and Entity Framework 4. The code has also been considerably cleaned up - taking advantage of E...
      * AnyCAD: AnyCAD.Free.ENU.v1.1: http://www.anycad.net Modeling: 2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon; Feature: Extrude, Loft, Chamfer, Sweep, Revol; Boolean: ...
      * Blueset Studio Opensource Projects: 多功能计算器 3.5: 稳定版本。
      * Code for Rapid C# Windows Development eBook: LLBLGen LINQPad Data Context Driver Ver 1.0.0.0: First release of a Static LLBLGen Pro Data Context Driver for LINQPad. I recommend LINQPad 4 as it seems more stable with this driver than LINQPad 2.
      * DSQLT - Dynamic SQL Templates: Release 1.2. Some behaviour has changed!!: Attention. Some behaviour has changed! Now it's necessary to use WildCards in the pattern-parameter for DSQLT.AllSourceContains DSQLT.Databases DSQ...
      * FDS AutoCAD plug-in: FDS to AutoCAD plug-in: Basic functionality was implemented. Some routines, like setting the fds executable location, are still not automated.
      * Feature Builder Guidance Extensions: FBGX 2 - Standalone FX: Background: The Feature Builder Guidance is extensible and displays guidance content supplied by all the Feature Builder Guidance Extensions (FBGX...
      * Floe IRC Client: Floe IRC Client 2010-05 R3: You can now right click on the input box to get options for toggling bold, underline, colors, etc. The size of the nickname column is now saved...
      * Floe IRC Client: Floe IRC Client 2010-05 R4: A user's channel status now appears next to their nick when they talk (e.g. @Nick or +Nick). Fixed an error where certain kinds of network probl...
      * HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.0: Version 1.0. Thanks to Wolfgang for all his help. I let this project languish for too long while focusing on other things, but his involvement has ...
      * InfoPath Editor for Developer: InfoPath Editor Beta 1: Initial Release: Can load InfoPath inner html. Can edit InfoPath inner html. InfoPath 2007 only.
      * LinkSharp: LinkSharp 0.1.0: First release of LinkSharp. Set up IIS, and use the sql script to create a new database.
      * PowerAuras: PowerAuras V3.0.0F: This version adds better integration with GTFO. New Flags Added: PvP flag, In 5-Man Instance, In Raid Instance, In Battleground, In Arena
      * Rx Contrib: V1.4: Adds the ability to catch internal exceptions and the ability to publish errors by queue adapters
      * SEO SiteMap: SEO SiteMap RC1: -
      * SevenZipLib Library: v9.13.2: Stable release associated with 7z.dll 9.13 beta. Ability to create and update archives not implemented yet.
      * Silverlight / WPF Controls: Upload, FlipPanel, DeepZoom, Animation, Encryption: Code Camp Demonstration: This code example demonstrates MVVM/MEF with WPF with attached properties, security and a custom ICommand class.
      * SQL Data Capture - Black Box Application Testing: SQLDataCapture V1.2: Added Entity Framework support to the CRUD generator (Insert Stored Procedure) and switched to VS 2010 for development.
      * Stopwatch: Stopwatch 0.1: Stopwatch Release 0.1
      * VCC: Latest build, v2.1.30515.0: Automatic drop of latest build
      * Yet another developer blog - Examples: Asynchronous TreeView in ASP.NET MVC: This sample application shows how to use the jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET MVC. This application is accompani...

    Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

    Most Active Projects: patterns & practices – Enterprise Library; Rawr; PHPExcel; BlogEngine.NET; Microsoft Biology Foundation; Customer Portal Accelerator for Microsoft Dynamics CRM; Windows Azure Command-line Tools for PHP Developers; Mirror Testing System; N2 CMS; StyleCop

    Read the article

  • Asynchronous readback from opengl front buffer using multiple PBO's

    - by KillianDS
    I am developing an application that needs to read back the whole frame from the front buffer of an OpenGL application. I can hijack the application's OpenGL library and insert my code on swapbuffers. At the moment I am successfully using a simple but excruciatingly slow glReadPixels command without PBOs. Now I read about using multiple PBOs to speed things up. While I think I've found enough resources to actually program that (it isn't that hard), I have some operational questions left. I would do something like this:

      1. Create a series (e.g. 3) of PBOs.
      2. Use glReadPixels in my swapBuffers override to read data from the front buffer to a PBO (should be fast and non-blocking, right?).
      3. Create a separate thread to call glMapBufferARB, once per PBO after a glReadPixels, because this will block until the pixels are in client memory.
      4. Process the data from step 3.

    Now my main concern is of course with steps 2 and 3. I read about glReadPixels used on PBOs being non-blocking. Will this be an issue if I issue new OpenGL commands very soon after that? Will those OpenGL commands block? Or will they continue (my guess), and if so, I guess only swapbuffers can be a problem. Will it stall, or will glReadPixels from the front buffer be many times faster than swapping (about every 15-30 ms)? Or, worst case scenario, will swapbuffers be executed while glReadPixels is still reading data to the PBO? My current guess is that the logic will do something like this: copy FRONT_BUFFER -> generic place in VRAM, then copy VRAM -> RAM. But I have no idea which of those two is the real bottleneck, and moreover, what the influence on the normal OpenGL command stream is.

    Then in step 3: is it wise to do this asynchronously in a thread separated from the normal OpenGL logic? At the moment I think not; it seems you have to restore buffer operations to normal after doing this, and I can't install synchronization objects in the original code to temporarily block those. So I think my best option is to define a certain swapbuffer delay before reading them out, e.g. calling glReadPixels on PBO i%3 and glMapBufferARB on PBO (i+2)%3 in the same thread, resulting in a delay of 2 frames. Also, when I call glMapBufferARB to use data in client memory, will this be the bottleneck, or will glReadPixels (asynchronously) be the bottleneck?

    And finally, if you have some better ideas to speed up frame readback from the GPU in OpenGL, please tell me, because this is a painful bottleneck in my current system. I hope my question is clear enough. I know the answer will probably also be somewhere on the internet, but I mostly came up with results that used PBOs to keep buffers in video memory and do processing there. I really need to read back the front buffer to RAM, and I do not find any clear explanations about performance in that case (which I need; I cannot rely on "it's faster", I need to explain why it's faster). Thank you
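    For what it's worth, here is a minimal sketch of the 2-frame-delay scheme described above, written in C# against the OpenTK bindings purely for illustration. OpenTK, the BGRA/unsigned-byte format and the 3-buffer ring are all assumptions; the real hook would live in the hijacked native swapbuffers.

        using System;
        using OpenTK.Graphics.OpenGL;

        class FrontBufferReader
        {
            const int PboCount = 3;          // round-robin ring of PBOs
            readonly int[] pbos = new int[PboCount];
            readonly int width, height;
            int frame;

            public FrontBufferReader(int width, int height)
            {
                this.width = width;
                this.height = height;
                GL.GenBuffers(PboCount, pbos);
                foreach (int pbo in pbos)
                {
                    GL.BindBuffer(BufferTarget.PixelPackBuffer, pbo);
                    GL.BufferData(BufferTarget.PixelPackBuffer,
                        (IntPtr)(width * height * 4), IntPtr.Zero,
                        BufferUsageHint.StreamRead);
                }
                GL.BindBuffer(BufferTarget.PixelPackBuffer, 0);
            }

            // Call once per SwapBuffers: start an async read into one PBO and
            // map the PBO filled two frames ago, which should be done by now.
            public void OnSwapBuffers(Action<IntPtr> processPixels)
            {
                GL.ReadBuffer(ReadBufferMode.Front);

                // glReadPixels into a bound PixelPackBuffer returns immediately;
                // the transfer happens asynchronously in the driver.
                GL.BindBuffer(BufferTarget.PixelPackBuffer, pbos[frame % PboCount]);
                GL.ReadPixels(0, 0, width, height,
                    PixelFormat.Bgra, PixelType.UnsignedByte, IntPtr.Zero);

                if (frame >= 2)
                {
                    // Two-frame delay: mapping an older PBO avoids stalling
                    // on the transfer that was just issued.
                    GL.BindBuffer(BufferTarget.PixelPackBuffer, pbos[(frame - 2) % PboCount]);
                    IntPtr data = GL.MapBuffer(BufferTarget.PixelPackBuffer,
                        BufferAccess.ReadOnly);
                    if (data != IntPtr.Zero)
                        processPixels(data);
                    GL.UnmapBuffer(BufferTarget.PixelPackBuffer);
                }

                GL.BindBuffer(BufferTarget.PixelPackBuffer, 0);
                frame++;
            }
        }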

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor! I think, with this series on various reports, we have come to a logical point. We have covered all the reports at the server level. This means the reports we saw were targeted towards activities related to instance-level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere:

        Patient: Doc, it hurts when I touch my head.
        Doc: Ok, go on. What else have you experienced?
        Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc.
        Doc: Hmmm ...
        Patient: I feel it hurts when I touch anywhere in my body.
        Doc: Ahh ... now I get it. You need a plaster on your finger, John.

    Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So, this is the first blog in the series where we start discussing database-level reports. To launch database-level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database-level reports. In this blog, we will talk about four "disk" reports, because they are similar:

      1. Disk Usage
      2. Disk Usage by Top Tables
      3. Disk Usage by Table
      4. Disk Usage by Partition

    Disk Usage

    This report shows multiple pieces of information about the database. Let us discuss them one by one. We have divided the output into 5 different sections.

    Section 1 shows the high-level summary of the database. It shows the space used by database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands; it is using sys.data_spaces and DBCC SHOWFILESTATS.

    Sections 2 and 3 are pie charts: one for data file allocation and another for the transaction log file. The pie chart for "Data Files Space Usage (%)" shows space consumed by data and indexes allocated to the SQL Server database, and unallocated space, which is allocated to the SQL Server database but not yet filled with anything. "Transaction Log Space Usage (%)" uses DBCC SQLPERF (LOGSPACE) and shows how much empty space we have in the physical transaction log file.

    Section 4 shows the data from the Default Trace and looks at Event IDs 92, 93, 94 and 95, which are for "Data File Auto Grow", "Log File Auto Grow", "Data File Auto Shrink" and "Log File Auto Shrink" respectively. If the default trace is not enabled, this section is replaced by the message "Trace Log is disabled".

    Section 5 of the report uses DBCC SHOWFILESTATS to get information. It shows the physical layout of the file.

    In case you have In-Memory objects in the database (from SQL Server 2014), the report shows information about those as well. The additional sections shown only for a database with In-Memory tables use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for In-Memory OLTP is 'FX' in sys.data_spaces.

    The next set of reports is targeted at getting information about a table and its storage. These reports can answer questions like:

      * Which is the biggest table in the database?
      * How many rows do we have in a table?
      * Is there any table which has a lot of reserved space that is unused?
      * Which partition of the table holds the most data?

    Disk Usage by Top Tables

    This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory-optimized tables.

    Disk Usage by Table

    This report is the same as the previous report, with a few differences:

      * The first report shows only 1000 rows.
      * The first report orders by values in the DMV sys.dm_db_partition_stats, whereas the second one orders by the name of the table.

    Both of the reports have an interactive sort facility: we can click on any column header and change the sorting order of the data.

    Disk Usage by Partition

    This report shows the distribution of the data in a table based on the partitions in the table. It is similar to the previous output, but with the partition details added. Here is the query taken from Profiler:

        SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number,
               (dense_rank() OVER (ORDER BY a5.name, a2.name)) % 2 AS l1,
               a1.OBJECT_ID,
               a5.name AS [schema],
               a2.name,
               a1.index_id,
               a3.name AS index_name,
               a3.type_desc,
               a1.partition_number,
               a1.used_page_count * 8 AS total_used_pages,
               a1.reserved_page_count * 8 AS total_reserved_pages,
               a1.row_count
        FROM sys.dm_db_partition_stats a1
        INNER JOIN sys.all_objects a2 ON (a1.OBJECT_ID = a2.OBJECT_ID)
            AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1)
        INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id)
        LEFT OUTER JOIN sys.indexes a3 ON ((a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id))
        WHERE (SELECT MAX(DISTINCT partition_number)
               FROM sys.dm_db_partition_stats a4
               WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1
            AND a2.TYPE <> N'S' AND a2.TYPE <> N'IT'
        ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number

    Using all of the above reports, you should be able to get the usage of database files and also the space used by tables. I think this is too much disk information for a single blog, and I hope you have used these reports in the past to get data. Do let me know if you found anything interesting using these reports in your environments.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL. Tagged: SQL Reports

    Read the article

  • Paypal IPN Handler for Cart

    - by kimmothy16
    Hey everyone! I'm using a PayPal "Add to Cart" button for a simple ecommerce website. I had a PayPal IPN handler written in PHP that was successfully working for 'Buy it Now' buttons, purchasing one item at a time. I would run a database query each time to update the store's inventory to reflect the purchase. Now I'm upgrading this store to a 'cart' version so people can check out with multiple items at a time. Would anyone be able to tell me, in general, how my IPN handler needs to be altered to accommodate this? I'm unsure of what a response from PayPal looks like for a cart purchase as opposed to a buy-it-now purchase. Thanks, any help or examples of working IPN cart scripts would be very appreciated! My current code is below.

        // PayPal POSTs HTML FORM variables to this page.
        // We must post all the variables back to PayPal exactly unchanged,
        // adding an extra parameter cmd with value _notify-validate.
        $req = 'cmd=_notify-validate';

        // Go through each of the POSTed vars and add them to the request.
        foreach ($_POST as $key => $value) {
            $value = urlencode(stripslashes($value));
            $req .= "&$key=$value";
        }

        // Post back to the PayPal system to validate.
        $header = "POST /cgi-bin/webscr HTTP/1.0\r\n";
        $header .= "Content-Type: application/x-www-form-urlencoded\r\n";
        $header .= "Content-Length: " . strlen($req) . "\r\n\r\n";

        // In a live application send it back to www.paypal.com, but during
        // development use the PayPal sandbox - comment out one of the
        // following lines.
        $fp = fsockopen('www.sandbox.paypal.com', 80, $errno, $errstr, 30);
        //$fp = fsockopen('www.paypal.com', 80, $errno, $errstr, 30);
        // or use port 443 for an SSL connection
        //$fp = fsockopen('ssl://www.paypal.com', 443, $errno, $errstr, 30);

        if (!$fp) {
            // HTTP ERROR
        } else {
            fputs($fp, $header . $req);
            while (!feof($fp)) {
                $res = fgets($fp, 1024);
                if (strcmp($res, "VERIFIED") == 0) {
                    $item_name = stripslashes($_POST['item_name']);
                    $item_number = $_POST['item_number'];
                    $item_id = $_POST['custom'];
                    $payment_status = $_POST['payment_status'];
                    $payment_amount = $_POST['mc_gross']; // full amount of payment; payment_gross in US
                    $payment_currency = $_POST['mc_currency'];
                    $txn_id = $_POST['txn_id']; // unique transaction id
                    $receiver_email = $_POST['receiver_email'];
                    $payer_email = $_POST['payer_email'];
                    $size = $_POST['option_selection1'];
                    $item_id = $_POST['item_id'];
                    $business = $_POST['business'];
                    if ($payment_status == 'Completed') {
                        // UPDATE THE DATABASE
                    }
                }
            }
        }
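    For reference, my understanding is that the main difference with a cart IPN is that the item variables arrive numbered: PayPal posts a num_cart_items count plus suffixed per-item fields such as item_name1, item_number1 and quantity1 (check PayPal's IPN variable reference for the full list before relying on this). A sketch of the idea, written in C# only to show the shape of the data - a PHP handler would loop the same way:

        using System;
        using System.Collections.Specialized;

        static class CartIpn
        {
            // Walks the numbered line items of a verified cart IPN post.
            // Field names (num_cart_items, item_name1, quantity1, ...) are
            // from PayPal's classic IPN variable list - verify them against
            // the current documentation.
            public static void ProcessCartItems(NameValueCollection post)
            {
                int count = int.Parse(post["num_cart_items"] ?? "0");
                for (int i = 1; i <= count; i++)
                {
                    string name = post["item_name" + i];
                    string number = post["item_number" + i];
                    string quantity = post["quantity" + i];
                    Console.WriteLine("{0} x {1} ({2})", quantity, name, number);
                    // Run the per-item inventory update query here.
                }
            }
        }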

    Read the article

  • Reproduce PIPE functionality in IronPython

    - by Muppet Geoff
    Hi, I am hoping some genius out there can help me out with this... I am using sox to merge and resample a group of WAV files and pipe the output directly to the input of NeroAACEnc for encoding to AAC format. I originally ran the process in a script, which included:

        sox.exe d:\audio\1.wav d:\audio\2.wav d:\audio\3.wav -c 1 -r 22050 -t wav - | neroAacEnc.exe -q 0.5 -if - -of test.m4a

    This worked as expected. The '-' in the command line translates as 'Pipe/redirect input/output (stdin/stdout)' - so sox pipes to stdout, neroAacEnc reads from stdin, and the | joins them together. I then migrated the whole solution to Python, and the equivalent command became:

        from subprocess import call, Popen, PIPE

        runwav = Popen(['sox.exe', 'd:\audio\1.wav', 'd:\audio\2.wav', 'd:\audio\3.wav',
                        '-c', '1', '-r', '22050', '-t', 'wav', '-'],
                       shell=False, stdout=PIPE)
        runm4b = call(['neroAacEnc.exe', '-q', '0.5', '-if', '-', '-of', 'test.m4a'],
                      shell=False, stdin=runwav.stdout)

    This also worked like a charm, exactly as expected. Slightly more convoluted, but hey :)

    Well, now I have to move it to IronPython, and the subprocess module isn't available (the partial implementation that is available doesn't have Popen/PIPE support - plus it seems silly to add a custom library when there is probably a native alternative). I should mention here that I opted for IronPython over C# because I am comfortable with Python now - however, there is a chance of moving it again later to C#, and I am using IronPython to ease myself into it :) I have no C# or .NET experience.

    So far I have the following equivalent, which sets up the two processes:

        from System.Diagnostics import Process

        wav = Process()
        wav.StartInfo.UseShellExecute = False
        wav.StartInfo.RedirectStandardOutput = True
        wav.StartInfo.FileName = 'sox.exe'
        wav.StartInfo.Arguments = 'd:\audio\1.wav d:\audio\2.wav d:\audio\3.wav -c 1 -r 22050 -t wav -'
        wav.Start()

        m4b = Process()
        m4b.StartInfo.UseShellExecute = False
        m4b.StartInfo.RedirectStandardInput = True
        m4b.StartInfo.FileName = 'neroAacEnc.exe'
        m4b.StartInfo.Arguments = '-q 0.5 -if - -of test.m4a'
        m4b.Start()

    I know that these two processes start (I can see Nero and sox in the task manager), but what I can't figure out (for the life of me) is how to string the two output/input streams together, as with the previous two solutions. I have searched and searched, so I thought I'd ask! If anyone knows either:

      1. How to join the two streams with the same net result as the Python and command-line versions; or
      2. A better way to achieve what I am trying to do.

    I would be extremely grateful! Many thanks in advance, Geoff

    P.S. A code sample based off the above would be awesome :) or a specific code example of a similar process that I can easily translate... this has broked my brayne.
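    One way to join the streams - shown here in C# syntax, though the same System.Diagnostics calls work verbatim from IronPython - is to pump sox's StandardOutput into Nero's StandardInput on a background thread and close the encoder's stdin when the copy finishes; closing stdin is what signals end-of-input so the encoder can finish the file. A minimal sketch, assuming the process setup from the question:

        using System.Diagnostics;
        using System.Threading;

        class PipeProcesses
        {
            static void Main()
            {
                var wav = new Process();
                wav.StartInfo.UseShellExecute = false;
                wav.StartInfo.RedirectStandardOutput = true;
                wav.StartInfo.FileName = "sox.exe";
                wav.StartInfo.Arguments =
                    @"d:\audio\1.wav d:\audio\2.wav d:\audio\3.wav -c 1 -r 22050 -t wav -";
                wav.Start();

                var m4b = new Process();
                m4b.StartInfo.UseShellExecute = false;
                m4b.StartInfo.RedirectStandardInput = true;
                m4b.StartInfo.FileName = "neroAacEnc.exe";
                m4b.StartInfo.Arguments = "-q 0.5 -if - -of test.m4a";
                m4b.Start();

                // Pump sox's stdout into Nero's stdin, then close stdin so
                // the encoder sees end-of-stream and finalizes the output.
                var pump = new Thread(() =>
                {
                    var buffer = new byte[4096];
                    int read;
                    while ((read = wav.StandardOutput.BaseStream
                               .Read(buffer, 0, buffer.Length)) > 0)
                    {
                        m4b.StandardInput.BaseStream.Write(buffer, 0, read);
                    }
                    m4b.StandardInput.BaseStream.Close();
                });
                pump.Start();

                wav.WaitForExit();
                pump.Join();
                m4b.WaitForExit();
            }
        }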

    Read the article

  • JBOSS 7.1 started hanging after 6 months of deployment

    - by PVR
    My application has been live for 6 months. The application is hosted on a JBoss 7.1 server. For the last few days I have been finding numerous problems with the JBoss server hanging. Even though I restart the JBoss server, it does not come back up; I need to restart the server machine itself. Can anyone please let me know what could be the cause of these problems and the workable resolutions, or any suggestions? Kindly don't downvote the question, as I am facing a lot of problems due to this hanging issue. Also, for information, the application is based on Java, GWT and Hibernate 3. Please find the standalone.xml file below, in case it helps.

        <extensions>
          <extension module="org.jboss.as.clustering.infinispan"/>
          <extension module="org.jboss.as.configadmin"/>
          <extension module="org.jboss.as.connector"/>
          <extension module="org.jboss.as.deployment-scanner"/>
          <extension module="org.jboss.as.ee"/>
          <extension module="org.jboss.as.ejb3"/>
          <extension module="org.jboss.as.jaxrs"/>
          <extension module="org.jboss.as.jdr"/>
          <extension module="org.jboss.as.jmx"/>
          <extension module="org.jboss.as.jpa"/>
          <extension module="org.jboss.as.logging"/>
          <extension module="org.jboss.as.mail"/>
          <extension module="org.jboss.as.naming"/>
          <extension module="org.jboss.as.osgi"/>
          <extension module="org.jboss.as.pojo"/>
          <extension module="org.jboss.as.remoting"/>
          <extension module="org.jboss.as.sar"/>
          <extension module="org.jboss.as.security"/>
          <extension module="org.jboss.as.threads"/>
          <extension module="org.jboss.as.transactions"/>
          <extension module="org.jboss.as.web"/>
          <extension module="org.jboss.as.webservices"/>
          <extension module="org.jboss.as.weld"/>
        </extensions>

        <system-properties>
          <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION" value="on"/>
          <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION_MIME_TYPES" value="text/javascript,text/css,text/html,text/xml,text/json"/>
        </system-properties>

        <management>
          <security-realms>
            <security-realm name="ManagementRealm">
              <authentication>
                <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/>
              </authentication>
            </security-realm>
            <security-realm name="ApplicationRealm">
              <authentication>
                <properties path="application-users.properties" relative-to="jboss.server.config.dir"/>
              </authentication>
            </security-realm>
          </security-realms>
          <management-interfaces>
            <native-interface security-realm="ManagementRealm">
              <socket-binding native="management-native"/>
            </native-interface>
            <http-interface security-realm="ManagementRealm">
              <socket-binding http="management-http"/>
            </http-interface>
          </management-interfaces>
        </management>

        <profile>
          <subsystem xmlns="urn:jboss:domain:logging:1.1">
            <console-handler name="CONSOLE">
              <level name="INFO"/>
              <formatter>
                <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
              </formatter>
            </console-handler>
            <periodic-rotating-file-handler name="FILE">
              <formatter>
                <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
              </formatter>
              <file relative-to="jboss.server.log.dir" path="server.log"/>
              <suffix value=".yyyy-MM-dd"/>
              <append value="true"/>
            </periodic-rotating-file-handler>
            <logger category="com.arjuna"><level name="WARN"/></logger>
            <logger category="org.apache.tomcat.util.modeler"><level name="WARN"/></logger>
            <logger category="sun.rmi"><level name="WARN"/></logger>
            <logger category="jacorb"><level name="WARN"/></logger>
            <logger category="jacorb.config"><level name="ERROR"/></logger>
            <root-logger>
              <level name="INFO"/>
              <handlers>
                <handler name="CONSOLE"/>
                <handler name="FILE"/>
              </handlers>
            </root-logger>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:configadmin:1.0"/>
          <subsystem xmlns="urn:jboss:domain:datasources:1.0">
            <datasources>
              <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true">
                <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url>
                <driver>h2</driver>
                <security>
                  <user-name>sa</user-name>
                  <password>sa</password>
                </security>
              </datasource>
              <drivers>
                <driver name="h2" module="com.h2database.h2">
                  <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
                </driver>
              </drivers>
            </datasources>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1">
            <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000"/>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:ee:1.0"/>
          <subsystem xmlns="urn:jboss:domain:ejb3:1.2">
            <session-bean>
              <stateless>
                <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
              </stateless>
              <stateful default-access-timeout="5000" cache-ref="simple"/>
              <singleton default-access-timeout="5000"/>
            </session-bean>
            <pools>
              <bean-instance-pools>
                <strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
                <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
              </bean-instance-pools>
            </pools>
            <caches>
              <cache name="simple" aliases="NoPassivationCache"/>
              <cache name="passivating" passivation-store-ref="file" aliases="SimpleStatefulCache"/>
            </caches>
            <passivation-stores>
              <file-passivation-store name="file"/>
            </passivation-stores>
            <async thread-pool-name="default"/>
            <timer-service thread-pool-name="default">
              <data-store path="timer-service-data" relative-to="jboss.server.data.dir"/>
            </timer-service>
            <remote connector-ref="remoting-connector" thread-pool-name="default"/>
            <thread-pools>
              <thread-pool name="default">
                <max-threads count="10"/>
                <keepalive-time time="100" unit="milliseconds"/>
              </thread-pool>
            </thread-pools>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:infinispan:1.2" default-cache-container="hibernate">
            <cache-container name="hibernate" default-cache="local-query">
              <local-cache name="entity">
                <transaction mode="NON_XA"/>
                <eviction strategy="LRU" max-entries="10000"/>
                <expiration max-idle="100000"/>
              </local-cache>
              <local-cache name="local-query">
                <transaction mode="NONE"/>
                <eviction strategy="LRU" max-entries="10000"/>
                <expiration max-idle="100000"/>
              </local-cache>
              <local-cache name="timestamps">
                <transaction mode="NONE"/>
                <eviction strategy="NONE"/>
              </local-cache>
            </cache-container>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:jaxrs:1.0"/>
          <subsystem xmlns="urn:jboss:domain:jca:1.1">
            <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/>
            <bean-validation enabled="true"/>
            <default-workmanager>
              <short-running-threads>
                <core-threads count="50"/>
                <queue-length count="50"/>
                <max-threads count="50"/>
                <keepalive-time time="10" unit="seconds"/>
              </short-running-threads>
              <long-running-threads>
                <core-threads count="50"/>
                <queue-length count="50"/>
                <max-threads count="50"/>
                <keepalive-time time="100" unit="seconds"/>
              </long-running-threads>
            </default-workmanager>
            <cached-connection-manager/>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:jdr:1.0"/>
          <subsystem xmlns="urn:jboss:domain:jmx:1.1">
            <show-model value="true"/>
            <remoting-connector/>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:jpa:1.0">
            <jpa default-datasource=""/>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:mail:1.0">
            <mail-session jndi-name="java:jboss/mail/Default">
              <smtp-server outbound-socket-binding-ref="mail-smtp"/>
            </mail-session>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:naming:1.1"/>
          <subsystem xmlns="urn:jboss:domain:osgi:1.2" activation="lazy">
            <properties>
              <property name="org.osgi.framework.startlevel.beginning">1</property>
            </properties>
            <capabilities>
              <capability name="javax.servlet.api:v25"/>
              <capability name="javax.transaction.api"/>
              <capability name="org.apache.felix.log" startlevel="1"/>
              <capability name="org.jboss.osgi.logging" startlevel="1"/>
              <capability name="org.apache.felix.configadmin" startlevel="1"/>
              <capability name="org.jboss.as.osgi.configadmin" startlevel="1"/>
            </capabilities>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:pojo:1.0"/>
          <subsystem xmlns="urn:jboss:domain:remoting:1.1">
            <connector name="remoting-connector" socket-binding="remoting" security-realm="ApplicationRealm"/>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:resource-adapters:1.0"/>
          <subsystem xmlns="urn:jboss:domain:sar:1.0"/>
          <subsystem xmlns="urn:jboss:domain:security:1.1">
            <security-domains>
              <security-domain name="other" cache-type="default">
                <authentication>
                  <login-module code="Remoting" flag="optional">
                    <module-option name="password-stacking" value="useFirstPass"/>
                  </login-module>
                  <login-module code="RealmUsersRoles" flag="required">
                    <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
                    <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
                    <module-option name="realm" value="ApplicationRealm"/>
                    <module-option name="password-stacking" value="useFirstPass"/>
                  </login-module>
                </authentication>
              </security-domain>
              <security-domain name="jboss-web-policy" cache-type="default">
                <authorization>
                  <policy-module code="Delegating" flag="required"/>
                </authorization>
              </security-domain>
              <security-domain name="jboss-ejb-policy" cache-type="default">
                <authorization>
                  <policy-module code="Delegating" flag="required"/>
                </authorization>
              </security-domain>
            </security-domains>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:threads:1.1"/>
          <subsystem xmlns="urn:jboss:domain:transactions:1.1">
            <core-environment>
              <process-id>
                <uuid/>
              </process-id>
            </core-environment>
            <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
            <coordinator-environment default-timeout="300"/>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false">
            <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/>
            <virtual-server name="default-host" enable-welcome-root="false">
              <alias name="localhost"/>
              <alias name="nextenders.com"/>
            </virtual-server>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:webservices:1.1">
            <modify-wsdl-address>true</modify-wsdl-address>
            <wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host>
            <endpoint-config name="Standard-Endpoint-Config"/>
            <endpoint-config name="Recording-Endpoint-Config">
              <pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM">
                <handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/>
              </pre-handler-chain>
            </endpoint-config>
          </subsystem>
          <subsystem xmlns="urn:jboss:domain:weld:1.0"/>
        </profile>

        <interfaces>
          <interface name="management">
            <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
          </interface>
          <interface name="public">
            <inet-address value="${jboss.bind.address:127.0.0.1}"/>
          </interface>
          <interface name="unsecure">
            <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/>
          </interface>
        </interfaces>

        <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
          <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/>
          <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
          <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/>
          <socket-binding name="ajp" port="8009"/>
          <socket-binding name="http" port="80"/>
          <socket-binding name="https" port="443"/>
          <socket-binding name="osgi-http" interface="management" port="8090"/>
          <socket-binding name="remoting" port="4447"/>
          <socket-binding name="txn-recovery-environment" port="4712"/>
          <socket-binding name="txn-status-manager" port="4713"/>
          <outbound-socket-binding name="mail-smtp">
            <remote-destination host="localhost" port="25"/>
          </outbound-socket-binding>
        </socket-binding-group>

    Read the article

  • Silverlight ViewBase in separate assembly - possible?

    - by Mark
    I have all my views in a project inheriting from a ViewBase class that inherits from UserControl. In my XAML I reference it thus:

        <f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView"
            xmlns:f="clr-namespace:Forte.UI.Modules.Configure.Views"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"

    It works fine. Now I have moved the ViewBase to another project (so I can reference it from multiple projects), so I reference it like:

        <f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView"
            xmlns:f="clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"

    This works fine when I run from the IDE, but when I run the same sln from MSBuild it gives a warning:

        "H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) -
        (ValidateXaml target) -
        H:\dev\ExternalCopy\Code\UI\Modules\Configure\Views\AddNewEmployee\AddNewEmployeeView.xaml(1,2,1,2): warning : The tag 'ViewBase' does not exist in XML namespace 'clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common'.

    Then it fails with:

        "H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) -
        (ValidateXaml target) -
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: The "ValidateXaml" task failed unexpectedly.
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: System.NullReferenceException: Object reference not set to an instance of an object.
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at MS.MarkupCompiler.ValidationPass.ValidateXaml(String fileName, Assembly[] assemblies, Assembly callingAssembly, TaskLoggingHelper log, Boolean shouldThrow)
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.Execute()
        C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult)

    Any ideas what might be causing this behaviour? Using Silverlight 3.

    Here is a cut-down version of the MSBuild file that fails to build the sln that builds fine in the IDE:

        <?xml version="1.0" encoding="utf-8" ?>
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Compile">
          <ItemGroup>
            <ProjectToBuild Include="..\UI\Forte.UI.sln">
              <Properties>Configuration=Debug</Properties>
            </ProjectToBuild>
          </ItemGroup>
          <Target Name="Compile">
            <MSBuild Projects="@(ProjectToBuild)"></MSBuild>
          </Target>
        </Project>

    Thanks for any help!
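    For reference, the cross-assembly base class itself is trivial - a sketch of what the Common project would contain (names taken from the question; the key points being that the class is public and has a parameterless constructor, so XAML in another assembly can use it as a root element):

        using System.Windows.Controls;

        namespace Forte.UI.Modules.Common.Views
        {
            // Compiled into Forte.UI.Modules.Common and referenced from the
            // view projects via xmlns:f="clr-namespace:...;assembly=...".
            public class ViewBase : UserControl
            {
            }
        }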

    Read the article

  • Run bat file in Java and wait 2

    - by Savvas Dalkitsis
    This is a follow-up question to my other question: http://stackoverflow.com/questions/2434125/run-bat-file-in-java-and-wait. The reason I am posting this as a separate question is that the one I already asked was answered correctly. From some research I did, my problem is unique to my case, so I decided to create a new question. Please go read that question before continuing with this one, as they are closely related.

    Running the proposed code blocks the program at the waitFor invocation. From some research I found that the waitFor method blocks if your process has output that needs to be processed, so you should first empty the output stream and the error stream. I did those things but my method still blocked. I then found a suggestion to simply loop while waiting for the exitValue method to return the exit value of the process, handling the exception thrown if it has not finished, and pausing for a brief moment as well so as not to consume all the CPU. I did this:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class Test {
            public static void main(String[] args) {
                try {
                    Process p = Runtime.getRuntime().exec(
                            "cmd /k start SQLScriptsToRun.bat"
                            + " -UuserName -Ppassword"
                            + " projectName");
                    final BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream()));
                    final BufferedReader error = new BufferedReader(new InputStreamReader(p.getErrorStream()));
                    // Drain stdout on a background thread so the process can't
                    // block on a full output buffer.
                    new Thread(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                while (input.readLine() != null) {}
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }).start();
                    // Same for stderr.
                    new Thread(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                while (error.readLine() != null) {}
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }).start();
                    int i = 0;
                    boolean finished = false;
                    while (!finished) {
                        try {
                            i = p.exitValue();
                            finished = true;
                        } catch (IllegalThreadStateException e) {
                            e.printStackTrace();
                            try {
                                Thread.sleep(500);
                            } catch (InterruptedException e1) {
                                e1.printStackTrace();
                            }
                        }
                    }
                    System.out.println(i);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    but my process will not end! I keep getting this error:

        java.lang.IllegalThreadStateException: process has not exited

    Any ideas as to why my process will not exit? Or do you have any libraries to suggest that handle executing batch files properly and wait until the execution is finished?

    Read the article

  • Why aren't the :locals hash variables being passed in to a partial, when called from inside my rake

    - by marshally
    I need to render a bunch of painfully long-running partials using a rake task. When I try to render the partial from a rake task, I get the dreaded "Called id for nil, which would mistakenly be 4" error, which usually means that my locals hash has not been properly set into the partial.

    Here's the rake task (some variable names have been changed to protect the innocent):

        namespace :precache do
          desc "Precache stuff"
          task :precache => :environment do
            av = ActionView::Base.new(Rails::Configuration.new.view_path, {})
            av.class_eval do
              include ApplicationHelper
            end
            @user = User.find(21)
            @rank = Rank.find(2)
            data = av.render(:partial => "reports/listing",
                             :locals => {:user => @user, :rank => @rank})
          end
        end

    And this is the error that I am getting:

        ** Execute precache:precache
        rake aborted!
        Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id
        On line #1 of app/views/reports/listing.html.erb

        1: <%- @rid = @rank.id %>
        2: <%- @cid = @user.id %>
        3: <%- cache(:action => 'reports', :key => [arg1, arg2, arg3] ) do %>
        4: <%-

        app/views/reports/_downline_js.html.erb:1
        lib/tasks/precache_fragments.rake:12
        rake (0.8.7) lib/rake.rb:636:in `call'
        rake (0.8.7) lib/rake.rb:636:in `execute'
        rake (0.8.7) lib/rake.rb:631:in `each'
        rake (0.8.7) lib/rake.rb:631:in `execute'
        rake (0.8.7) lib/rake.rb:597:in `invoke_with_call_chain'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        rake (0.8.7) lib/rake.rb:590:in `invoke_with_call_chain'
        rake (0.8.7) lib/rake.rb:583:in `invoke'
        rake (0.8.7) lib/rake.rb:2051:in `invoke_task'
        rake (0.8.7) lib/rake.rb:2029:in `top_level'
        rake (0.8.7) lib/rake.rb:2029:in `each'
        rake (0.8.7) lib/rake.rb:2029:in `top_level'
        rake (0.8.7) lib/rake.rb:2068:in `standard_exception_handling'
        rake (0.8.7) lib/rake.rb:2023:in `top_level'
        rake (0.8.7) lib/rake.rb:2001:in `run'
        rake (0.8.7) lib/rake.rb:2068:in `standard_exception_handling'
        rake (0.8.7) lib/rake.rb:1998:in `run'
        rake (0.8.7) bin/rake:31
        /usr/bin/rake:19:in `load'
        /usr/bin/rake:19

    Details: I'm using Rails 2.3.5 and Ruby 1.8.7. Developing on Mac OS X. Eventually I will be deploying to Heroku.

    Read the article

  • Windows Azure Myths

    - by BuckWoody
    Windows Azure is part of the Microsoft "stack" - the suite of software and services we offer. Because we have so many products in almost every part of technology, it's hard to know everything about all parts of what we do - even for those of us who work here. So it's no surprise that some folks are not as familiar with Windows and SQL Azure as they are with, say, Windows Server or Xbox. As I chat with folks about a solution for a business or organization need, I put Windows Azure into the mix. I always start off with "What do you already know about Windows Azure?" so that I don't bore folks with information they already have. In some cases they've checked out the product ahead of time and have specific questions, in others they aren't as familiar, and in still others there is a fair amount of mis-information. Sometimes that's because of a marketing failure, sometimes it's hearsay, and sometimes it's active misinformation. I thought I might lay out a few of these misconceptions. As always - do your fact-checking! Never take anyone's word alone (including mine) as gospel. Make sure you educate yourself on your options. Your company or your clients depend on you to have the right information on IT, so make sure you live up to that.

    Myth 1: Nobody uses Windows Azure

    It's true that we don't give out numbers on the amount of clients on Windows and SQL Azure. But lots of folks are here - companies you may have heard of like Boeing, NASA, Fujitsu, The City of London, Neudesic, and many others. I deal with firms small and large that use Windows Azure for mission-critical applications, sometimes totally on Windows and/or SQL Azure, sometimes in conjunction with an on-premises system, sometimes for only a specific component in Windows Azure like storage. The interesting thing is that many sites you visit have a Windows Azure component, or are running on Windows Azure. They just don't announce it. Just like the other cloud providers, the companies have asked to be completely branded themselves - they don't want you to be aware or care that they are on Windows Azure. Sometimes that's for security, other times it's for different reasons. It's just like the web sites you visit: for the most part, they don't advertise which OS or web server they use. It really just shouldn't matter. The point is that they just use what works to solve a given problem. Check out a few public case studies here: https://www.windowsazure.com/en-us/home/case-studies/

    Myth 2: It's only for Microsoft stuff - can't use Open Source

    This is the one I face the most, and am the most dismayed by. We work just fine with many open source products, including Java, NodeJS, PHP, Ruby, Python, Hadoop, and many other languages and applications. You can quickly deploy a WordPress, Umbraco and other "kits". We have software development kits (SDKs) for iPhones, iPads, Android, Windows Phone and more. We have an SDK to work with Facebook and other social networks. In short, we play well with others. More on the languages and runtimes we support here: https://www.windowsazure.com/en-us/develop/overview/ More on the SDKs here: http://www.wadewegner.com/2011/05/windows-azure-toolkit-for-ios/, http://www.wadewegner.com/2011/08/windows-azure-toolkits-for-devices-now-with-android/, http://azuretoolkit.codeplex.com/

    Myth 3: Microsoft expects me to switch everything to "the cloud"

    No, we don't. That would be disastrous, unless everything you run in your company works perfectly in Azure. Use Windows Azure - or any cloud for that matter - where it works. Whenever I talk to companies, I focus on two things:

      1. Something that is broken and needs to be re-architected
      2. Something you want to do that is new

    If something is broken and you need new tools to scale, extend, add capacity dynamically and so on, then you can consider using Windows or SQL Azure. It can help solve problems that you have, or it may include a component you don't want to write or architect yourself. Sometimes you want to do something new, like extend your company's offerings to mobile phones, to the web, or to a social network. More info on where it works here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Myth 4: I have to write code to use Windows and SQL Azure

    If Windows Azure is a PaaS - a Platform as a Service - then don't you have to write code to use it? Nope. Windows and SQL Azure are made up of various components. Some of those components allow you to write and deploy code (like Compute) and others don't. We have lots of customers using Windows Azure storage as a backup, to securely share files instead of using DropBox, to distribute videos or code or firmware, and more. Others use our High Performance Computing (HPC) offering to rent a supercomputer when they need one. You can even throw workloads at that using Excel! In addition there are lots of other components in Windows Azure you can use, from the Windows Azure Media Services to others. More here: https://www.windowsazure.com/en-us/home/scenarios/saas/

    Myth 5: Windows Azure is just another form of "vendor lock-in"

    Windows Azure uses .NET, OSS languages and standard interfaces for the code. Sure, you're not going to take the code line-for-line and run it on a mainframe, but it's standard code that you write, and you can port it to something else. And the data is yours - you can bring it back whenever you want. It's either in text or binary form, and you have complete control over it. There are no licenses - you can "pay as you go", and when you're done, you can leave the service and take all your code, data and IP with you.

    So go out there, read up, try it. Use it where it works. And don't believe everything you hear - sometimes the Internet doesn't get it all correct. :)

    Read the article

  • Installing a DHCP Service On Win2k8 ( Windows Server 2008 )

    - by Akshay Deep Lamba
Introduction

Dynamic Host Configuration Protocol (DHCP) is a core infrastructure service on any network that provides IP addressing and DNS server information to PC clients and any other device. DHCP is used so that you do not have to statically assign IP addresses to every device on your network and manage the issues that static IP addressing can create. More and more, DHCP is being expanded to fit into new network services like the Windows Health Service and Network Access Protection (NAP). However, before you can use it for more advanced services, you need to first install it and configure the basics. Let's learn how to do that.

Installing Windows Server 2008 DHCP Server

Installing Windows Server 2008 DHCP Server is easy. DHCP Server is now a "role" of Windows Server 2008 - not a Windows component as it was in the past. To do this, you will need a Windows Server 2008 system already installed and configured with a static IP address. You will need to know your network's IP address range, the range of IP addresses you will want to hand out to your PC clients, your DNS server IP addresses, and your default gateway. Additionally, you will want to have a plan for all subnets involved, what scopes you will want to define, and what exclusions you will want to create. To start the DHCP installation process, you can click Add Roles from the Initial Configuration Tasks window or from Server Manager → Roles → Add Roles.

Figure 1: Adding a new Role in Windows Server 2008

When the Add Roles Wizard comes up, you can click Next on that screen. Next, select that you want to add the DHCP Server Role, and click Next.

Figure 2: Selecting the DHCP Server Role

If you do not have a static IP address assigned on your server, you will get a warning that you should not install DHCP with a dynamic IP address. At this point, you will begin being prompted for IP network information, scope information, and DNS information. If you only want to install DHCP server with no configured scopes or settings, you can just click Next through these questions and proceed with the installation. On the other hand, you can optionally configure your DHCP Server during this part of the installation. In my case, I chose to take this opportunity to configure some basic IP settings and configure my first DHCP Scope. I was shown my network connection binding and asked to verify it, like this:

Figure 3: Network connection binding

What the wizard is asking is, "What interface do you want to provide DHCP services on?" I took the default and clicked Next. Next, I entered my Parent Domain, Primary DNS Server, and Alternate DNS Server (as you see below) and clicked Next.

Figure 4: Entering domain and DNS information

I opted NOT to use WINS on my network and I clicked Next. Then, I was prompted to configure a DHCP scope for the new DHCP Server. I opted to configure an IP address range of 192.168.1.50-100 to cover the 25+ PC clients on my local network. To do this, I clicked Add to add a new scope. As you see below, I named the scope WBC-Local, configured the starting and ending IP addresses of 192.168.1.50-192.168.1.100, a subnet mask of 255.255.255.0, a default gateway of 192.168.1.1, the type of subnet (wired), and activated the scope.

Figure 5: Adding a new DHCP Scope

Back in the Add Scope screen, I clicked Next to add the new scope (once the DHCP Server is installed). I chose to Disable DHCPv6 stateless mode for this server and clicked Next. Then, I confirmed my DHCP Installation Selections (on the screen below) and clicked Install.

Figure 6: Confirm Installation Selections

After only a few seconds, the DHCP Server was installed and I saw the window below:

Figure 7: Windows Server 2008 DHCP Server Installation succeeded

I clicked Close to close the installer window, then moved on to how to manage my new DHCP Server.

How to Manage your new Windows Server 2008 DHCP Server

Like the installation, managing Windows Server 2008 DHCP Server is also easy. Back in my Windows Server 2008 Server Manager, under Roles, I clicked on the new DHCP Server entry.

Figure 8: DHCP Server management in Server Manager

While I cannot manage the DHCP Server scopes and clients from here, what I can do is manage what events, services, and resources are related to the DHCP Server installation. Thus, this is a good place to go to check the status of the DHCP Server and what events have happened around it. However, to really configure the DHCP Server and see what clients have obtained IP addresses, I need to go to the DHCP Server MMC. To do this, I went to Start → Administrative Tools → DHCP Server, like this:

Figure 9: Starting the DHCP Server MMC

When expanded out, the MMC offers a lot of features. Here is what it looks like:

Figure 10: The Windows Server 2008 DHCP Server MMC

The DHCP Server MMC offers IPv4 & IPv6 DHCP Server info including all scopes, pools, leases, reservations, scope options, and server options. If I go into the address pool and the scope options, I can see that the configuration we made when we installed the DHCP Server did, indeed, work. The scope IP address range is there, and so are the DNS Server & default gateway.

Figure 11: DHCP Server Address Pool

Figure 12: DHCP Server Scope Options

So how do we know that this really works if we do not test it? The answer is that we do not. Now, let's test to make sure it works.

How do we test our Windows Server 2008 DHCP Server?

To test this, I have a Windows Vista PC client on the same network segment as the Windows Server 2008 DHCP server. To be safe, I have no other devices on this network segment. I did an IPCONFIG /RELEASE then an IPCONFIG /RENEW and verified that I received an IP address from the new DHCP server, as you can see below:

Figure 13: Vista client received IP address from new DHCP Server

Also, I went to my Windows 2008 Server and verified that the new Vista client was listed as a client on the DHCP server. This did indeed check out, as you can see below:

Figure 14: Win 2008 DHCP Server has the Vista client listed under Address Leases

With that, I knew that I had a working configuration and we are done!

    Read the article

  • Scripting Language Sessions at Oracle OpenWorld and MySQL Connect, 2012

    - by cj
This post highlights some great scripting language sessions coming up at the Oracle OpenWorld and MySQL Connect conferences. These events are happening in San Francisco from the end of September. You can search for other interesting conference sessions in the Content Catalog. Also check out what is happening at JavaOne in that event's Content Catalog (I haven't included sessions from it in this post.) To find the timeslots and locations of each session, click their respective link and check the "Session Schedule" box on the top right. GEN8431 - General Session: What’s New in Oracle Database Application Development This general session takes a look at what’s been new in the last year in Oracle Database application development tools using the latest generation of database technology. Topics range from Oracle SQL Developer and Oracle Application Express to Java and PHP. (Thomas Kyte - Architect, Oracle) BOF9858 - Meet the Developers of Database Access Services (OCI, ODBC, DRCP, PHP, Python) This session is your opportunity to meet in person the Oracle developers who have built Oracle Database access tools and products such as the Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), and Open Database Connectivity (ODBC) drivers; Transparent Application Failover (TAF); Oracle Database Instant Client; Database Resident Connection Pool (DRCP); Oracle Net Services, and so on. The team also works with those who develop the PHP, Ruby, Python, and Perl adapters for Oracle Database. Come discuss with them the features you like, your pains, and new product enhancements in the latest database technology. CON8506 - Syndication and Consolidation: Oracle Database Driver for MySQL Applications This technical session presents a new Oracle Database driver that enables you to run MySQL applications (written in PHP, Perl, C, C++, and so on) against Oracle Database with almost no code change. Use cases for such a driver include application syndication such as interoperability across relational database management systems, application migration, and database consolidation. In addition, the session covers enhancements in database technology that enable and simplify the migration of third-party databases and applications to, and consolidation with, Oracle Database. Attend this session to learn more and see a live demo. (Srinath Krishnaswamy - Director, Software Development, Oracle. Kuassi Mensah - Director Product Management, Oracle. Mohammad Lari - Principal Technical Staff, Oracle) CON9167 - Current State of PHP and MySQL Together, PHP and MySQL power large parts of the Web. The developers of both technologies continue to enhance their software to ensure that developers can be satisfied despite all their changing and growing needs. This session presents an overview of changes in PHP 5.4, which was released earlier this year, and shows you various new MySQL-related features available for PHP, from transparent client-side caching to direct support for scaling and high-availability needs. (Johannes Schlüter - Software Developer, Oracle) CON8983 - Sharding with PHP and MySQL In deploying MySQL, scale-out techniques can be used to scale out reads, but for scaling out writes, other techniques have to be used. To distribute writes over a cluster, it is necessary to shard the database and store the shards on separate servers (a short sketch of a static scheme of this kind appears after the session list below). 
This session provides a brief introduction to traditional MySQL scale-out techniques in preparation for a discussion on the different sharding techniques that can be used with MySQL server and how they can be implemented with PHP. You will learn about static and dynamic sharding schemes, their advantages and drawbacks, techniques for locating and moving shards, and techniques for resharding. (Mats Kindahl - Senior Principal Software Developer, Oracle) CON9268 - Developing Python Applications with MySQL Utilities and MySQL Connector/Python This session discusses MySQL Connector/Python and the MySQL Utilities component of MySQL Workbench and explains how to write MySQL applications in Python. It includes in-depth explanations of the features of MySQL Connector/Python and the MySQL Utilities library, along with example code to illustrate the concepts. Those interested in learning how to expand or build their own utilities and connector features will benefit from the tips and tricks from the experts. This session also provides an opportunity to meet directly with the engineers and provide feedback on your issues and priorities. You can learn what exists today and influence future developments. (Geert Vanderkelen - Software Developer, Oracle) BOF9141 - MySQL Utilities and MySQL Connector/Python: Python Developers, Unite! Come to this lively discussion of the MySQL Utilities component of MySQL Workbench and MySQL Connector/Python. It includes in-depth explanations of the features and dives into the code for those interested in learning how to expand or build their own utilities and connector features. This is an audience-driven session, so put on your best Python shirt and let’s talk about MySQL Utilities and MySQL Connector/Python. (Geert Vanderkelen - Software Developer, Oracle. Charles Bell - Senior Software Developer, Oracle) CON3290 - Integrating Oracle Database with a Social Network Facebook, Flickr, YouTube, Google Maps. There are many social network sites, each with their own APIs for sharing data with them. Most developers do not realize that Oracle Database has base tools for communicating with these sites, enabling all manner of information, including multimedia, to be passed back and forth between the sites. This technical presentation goes through the methods in PL/SQL for connecting to, and then sending and retrieving, all types of data between these sites. (Marcelle Kratochvil - CTO, Piction) CON3291 - Storing and Tuning Unstructured Data and Multimedia in Oracle Database Database administrators need to learn new skills and techniques when the decision is made in their organization to let Oracle Database manage its unstructured data. They will face new scalability challenges. A single row in a table can become larger than a whole database. This presentation covers the techniques a DBA needs for managing the large volume of data in a standard Oracle Database instance. (Marcelle Kratochvil - CTO, Piction) CON3292 - Using PHP, Perl, Visual Basic, Ruby, and Python for Multimedia in Oracle Database These five programming languages are just some of the most popular ones in use at the moment in the marketplace. This presentation details how you can use them to access and retrieve multimedia from Oracle Database. It covers programming techniques and methods for achieving faster development against Oracle Database. 
(Marcelle Kratochvil - CTO, Piction) UGF5181 - Building Real-World Oracle DBA Tools in Perl Perl is not normally associated with building mission-critical applications or DBA tools. Learn why Perl could be a good choice for building your next killer DBA app. This session draws on real-world experience of building DBA tools in Perl, showing the framework and architecture needed to deal with portability, efficiency, and maintainability. Topics include Perl frameworks; which Comprehensive Perl Archive Network (CPAN) modules are good to use; Perl and CPAN module licensing; Perl and Oracle connectivity; compiling and deploying your app; and an example of what is possible with Perl. (Arjen Visser - CEO & CTO, Dbvisit Software Limited) CON3153 - Perl: A DBA’s and Developer’s Best (Forgotten) Friend This session reintroduces Perl as a language of choice for many solutions for DBAs and developers. Discover what makes Perl so successful and why it is so versatile in our day-to-day lives. Perl can automate all those manual tasks and is truly platform-independent. Perl may not be in the limelight the way other languages are, but it is a remarkable language, it is still very current with ongoing development, and it has amazing online resources. Learn what makes Perl so great (including CPAN), get an introduction to Perl language syntax, find out what you can use Perl for, hear how Oracle uses Perl, discover the best way to learn Perl, and take away a small Perl project challenge. (Arjen Visser - CEO & CTO, Dbvisit Software Limited) CON10332 - Oracle RightNow CX Cloud Service’s Connect PHP API: Intro, What’s New, and Roadmap Connect PHP is a public API that enables developers to build solutions with the Oracle RightNow CX Cloud Service platform. This API is used primarily by developers working within the Oracle RightNow Customer Portal Cloud Service framework who are looking to gain access to data and services hosted by the Oracle RightNow CX Cloud Service platform through a backward-compatible API. Connect for PHP leverages the same data model and services as the Connect Web Services for SOAP API. Come to this session to get an introduction and learn what’s new and what’s coming up. (Mark Rhoads - Senior Principal Applications Engineer, Oracle. Mark Ericson - Sr. Principal Product Manager, Oracle) CON10330 - Oracle RightNow CX Cloud Service APIs and Frameworks Overview Oracle RightNow CX Cloud Service APIs are available in the following areas: desktop UI, Web services, customer portal, PHP, and knowledge. These frameworks provide access to Oracle RightNow CX Cloud Service’s Connect Common Object Model and custom objects. This session provides a broad overview of capabilities in all these areas. (Mark Ericson - Sr. Principal Product Manager, Oracle)
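As promised above for the sharding session (CON8983), here is a minimal C# sketch of the static sharding scheme that session describes. It is purely illustrative - the class name, shard count, and connection strings are my own assumptions, not anything from the session itself:

using System.Collections.Generic;

// Minimal illustration of static sharding: each key is routed to one of
// N servers by hashing. All names here are illustrative assumptions.
public class StaticShardRouter
{
    private readonly IReadOnlyList<string> _shardConnectionStrings;

    public StaticShardRouter(IReadOnlyList<string> shardConnectionStrings)
    {
        _shardConnectionStrings = shardConnectionStrings;
    }

    // Static scheme: shard index = hash(key) mod N. Simple and fast,
    // but changing N relocates most keys - the resharding drawback that
    // dynamic schemes with a shard-lookup table are designed to avoid.
    public string GetShardFor(string shardKey)
    {
        // Mask the sign bit so the index is never negative. Note that
        // string.GetHashCode() is not stable across processes; a real
        // router would use a stable hash (e.g., MD5) over the key.
        int index = (shardKey.GetHashCode() & 0x7FFFFFFF) % _shardConnectionStrings.Count;
        return _shardConnectionStrings[index];
    }
}

A router like this would be constructed once with the list of shard servers, and every write for, say, customer 42 would go to router.GetShardFor("customer:42").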

    Read the article

  • trying to build Boost MPI, but the lib files are not created. What's going on?

    - by unknownthreat
I am trying to run a program with Boost MPI, but the thing is I don't have the .lib. So I try to create one by following the instructions at http://www.boost.org/doc/libs/1_43_0/doc/html/mpi/getting_started.html#mpi.config The instructions say "For many users using LAM/MPI, MPICH, or OpenMPI, configuration is almost automatic". I got myself OpenMPI in C:\, but I didn't do anything more with it. Do we need to do anything with it? Besides that, another statement from the instructions: "If you don't already have a file user-config.jam in your home directory, copy tools/build/v2/user-config.jam there." Well, I simply did what it says. I got myself "user-config.jam" in C:\boost_1_43_0, along with "using mpi ;" added to the file. Next, this is what I've done: bjam --with-mpi

C:\boost_1_43_0>bjam --with-mpi
WARNING: No python installation configured and autoconfiguration failed. See http://www.boost.org/libs/python/doc/building.html for configuration instructions or pass --without-python to suppress this message and silently skip all Boost.Python targets
Building the Boost C++ Libraries.
warning: skipping optional Message Passing Interface (MPI) library.
note: to enable MPI support, add "using mpi ;" to user-config.jam.
note: to suppress this message, pass "--without-mpi" to bjam.
note: otherwise, you can safely ignore this message.
warning: Unable to construct ./stage-unversioned
warning: Unable to construct ./stage-unversioned
Component configuration:
- date_time : not building
- filesystem : not building
- graph : not building
- graph_parallel : not building
- iostreams : not building
- math : not building
- mpi : building
- program_options : not building
- python : not building
- random : not building
- regex : not building
- serialization : not building
- signals : not building
- system : not building
- test : not building
- thread : not building
- wave : not building
...found 1 target...
The Boost C++ Libraries were successfully built!
The following directory should be added to compiler include paths: C:\boost_1_43_0
The following directory should be added to linker library paths: C:\boost_1_43_0\stage\lib
C:\boost_1_43_0>

I see that there are many libs in C:\boost_1_43_0\stage\lib, but I see no trace of libboost_mpi-vc100-mt-1_43.lib or libboost_mpi-vc100-mt-gd-1_43.lib at all. These are the libraries required for linking in MPI applications. What could have possibly gone wrong such that the libraries are not being built?

    Read the article

  • Facebook Connect 'next' error

    - by Mark
I am trying to experiment with the new facebook authentication system, and I can't get the login to work. I'm getting the following error message: API Error Code: 100 API Error Description: Invalid parameter Error Message: next is not owned by the application. The url that is being sent to facebook is: http://www.facebook.com/connect/uiserver.php?app_id=444444444444444&next=http%3A%2F%2Fstatic.ak.fbcdn.net%2Fconnect%2Fxd_proxy.php%23%3F%3D%26cb%3Df357eceb0361a8a%26origin%3Dhttp%253A%252F%252Fwww.mysite.com%252Ff38fea4f9ea573%26relation%3Dopener%26transport%3Dpostmessage%26frame%3Df23b800f8a78%26result%3DxxRESULTTOKENxx&display=popup&channel=http%3A%2F%2Fwww.mysite.com%2Ffbtester.php&cancel=http%3A%2F%2Fstatic.ak.fbcdn.net%2Fconnect%2Fxd_proxy.php%23%3F%3D%26cb%3Df6095a98598be8%26origin%3Dhttp%253A%252F%252Fwww.mysite.com%252Ff38fea4f9ea573%26relation%3Dopener%26transport%3Dpostmessage%26frame%3Df23b800f8a78%26result%3DxxRESULTTOKENxx&locale=en_US&return_session=1&session_version=3&fbconnect=1&canvas=0&legacy_return=1&method=permissions.request Note that the 'Next' variable in the url is: next=http%3A%2F%2Fstatic.ak.fbcdn.net%2Fconnect%2Fxd_proxy.php%23%3F%3D%26cb%3Df357eceb0361a8a%26origin%3Dhttp%253A%252F%252Fwww.mysite.com%252Ff38fea4f9ea573%26relation%3Dopener%26transport%3Dpostmessage%26frame%3Df23b800f8a78%26result%3DxxRESULTTOKENxx Any ideas what could be going wrong? All I've done is copy and paste the facebook login demo code from facebook's website:

<?php
define('FACEBOOK_APP_ID', 'your application id');
define('FACEBOOK_SECRET', 'your application secret');

function get_facebook_cookie($app_id, $application_secret) {
  $args = array();
  parse_str(trim($_COOKIE['fbs_' . $app_id], '\\"'), $args);
  ksort($args);
  $payload = '';
  foreach ($args as $key => $value) {
    if ($key != 'sig') {
      $payload .= $key . '=' . $value;
    }
  }
  if (md5($payload . $application_secret) != $args['sig']) {
    return null;
  }
  return $args;
}

$cookie = get_facebook_cookie(FACEBOOK_APP_ID, FACEBOOK_SECRET);
?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">
<body>
<?php if ($cookie) { ?>
  Your user ID is <?= $cookie['uid'] ?>
<?php } else { ?>
  <fb:login-button></fb:login-button>
<?php } ?>
<div id="fb-root"></div>
<script src="http://connect.facebook.net/en_US/all.js"></script>
<script>
  FB.init({appId: '<?= FACEBOOK_APP_ID ?>', status: true, cookie: true, xfbml: true});
  FB.Event.subscribe('auth.login', function(response) {
    window.location.reload();
  });
</script>
</body>
</html>

Thanks for the help!

    Read the article

  • ASP.NET MVC 2 throws exception for ‘favicon.ico’

    - by nmarun
I must be on fire or something – third blog in 2 days… awesome! Before I begin, in case you're wondering, favicon.ico is the small image that appears to the left of your web address, once the page loads. In order to learn more about MVC or anything for that matter, it's better to look at the source itself. Since MVC is open source (at least some part of it is), I started looking at the source code that's available for download. While doing so, I hit Steve Sanderson's blog site where he explains in great detail the way to debug your app using ASP.NET MVC source code. For those who are not aware, Steve Sanderson's book - Pro ASP.NET MVC Framework - is one of the best books to learn about MVC. Alrighty, I followed the article and I hit F5 to debug the default / unchanged MVC project. I put a breakpoint in the DefaultControllerFactory.cs, CreateController() method. To know a little more about this class and the method, read this. Sure enough, the control stopped at the breakpoint and I hit F5 again and the page rendered just fine. But then what's this? The breakpoint was hit again, as if something else was being requested. I now hovered my mouse over the 'controllerName' parameter and it says – favicon.ico. This by itself was more than enough for me to raise my eyebrows, but what happened next just took the ground from under my feet. Oh, oh, I'm sorry I'm just typing, no code, no image, so here are a couple of screen captures. The first one shows the request for the Home controller; I get 'Home' when I hover over the parameter: And here's the one that shows the same for the call for 'favicon.ico'. So, I step through the code and when the control reaches line 91 – the GetControllerInstance() method – I step in. This is when I had the 'ground-losing' experience. Wow, an exception is being thrown for this file, and that too in RTM. For some reason MVC treats this as a controller and tries to run it through the MvcHandler, and it hits this snag. So it seems like this will happen for any MVC 2 site, and this did not happen for me in the previous version of MVC. Before I get to how to resolve it, here's another way of reproducing this exception. Revert all the changes you made as mentioned in Steve's blog above. Now, add a class to your MVC project and call it, say, MyControllerFactory, and let it inherit from the DefaultControllerFactory class. (Read this for details on what the DefaultControllerFactory class is and how it is used in a different context.) Add an override for the CreateController() method and, for the sake of this blog, just copy the same content from the DefaultControllerFactory class. The last step is to tell your MVC app to use the MyControllerFactory class instead of the default one.
To do this, go to your Global.asax.cs file and add line 6 of the snippet below:

1: protected void Application_Start()
2: {
3:     AreaRegistration.RegisterAllAreas();
4:
5:     RegisterRoutes(RouteTable.Routes);
6:     ControllerBuilder.Current.SetControllerFactory(new MyControllerFactory());
7: }

Now, you're ready to reproduce the issue. Just F5 the project, and when you hit the overridden CreateController() method for the second time, this is what it looks like for me: And continuing further gives me the same exception. I believe this is something that MS should fix, as not having a 'favicon.ico' file will be common for many applications. So I think that when you create an MVC project, line 6 below should be added by default by Visual Studio itself:

1: public class MvcApplication : System.Web.HttpApplication
2: {
3:     public static void RegisterRoutes(RouteCollection routes)
4:     {
5:         routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
6:         routes.IgnoreRoute("favicon.ico");
7:
8:         routes.MapRoute(
9:             "Default", // Route name
10:             "{controller}/{action}/{id}", // URL with parameters
11:             new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
12:         );
13:     }

There it is - that's the solution to avoid the exception altogether. I tried this in both IE8 and Firefox browsers and was able to successfully reproduce the error. Hope someone will look at this issue and find a fix. Just before I finish up, I found another 'bug', if you want to call it that, with Visual Studio 2008. Remember how you could change what browser you want your application to run in by just right-clicking on the .aspx file and choosing 'Browse with…'? Seems like that's missing when you're working with an MVC project. In order to test the above bug in the other browser, I had to load a classic ASP.NET project, change the settings and then run my MVC project. Felt kinda 'icky', for lack of a better word.

    Read the article

  • Connecting to a WSE 3.0 Web Service From a WCF Client

    - by Dave
I'm having difficulty connecting to a 3rd party WSE 3.0 web service from a WCF client. I have implemented the custom binding class as indicated in this KB article: http://msdn.microsoft.com/en-us/library/ms734745.aspx The problem seems to be the security assertion used by the web service - UsernameOverTransport. When I attempt to call a method, I get the following exception:

System.InvalidOperationException: The 'WseHttpBinding'.'[namespace]' binding for the 'MyWebServiceSoap'.'[namespace]' contract is configured with an authentication mode that requires transport level integrity and confidentiality. However the transport cannot provide integrity and confidentiality.

It is expecting a username, password, and CN number. In the example code supplied to us by the vendor, these credentials are bundled in a Microsoft.Web.Services3.Security.Tokens.UsernameToken. Here's the example supplied by the vendor:

MyWebServiceWse proxy = new MyWebServiceWse();
UsernameToken token = new UsernameToken("Username", "password", PasswordOption.SendPlainText);
token.Id = "<supplied CN Number>";
proxy.SetClientCredential(token);
proxy.SetPolicy(new Policy(new UsernameOverTransportAssertion(), new RequireActionHeaderAssertion()));
MyObject mo = proxy.MyMethod();

This works fine from a .NET 2.0 app with WSE 3.0 installed. Here is a snippet of the code from my WCF client:

EndpointAddress address = new EndpointAddress(new Uri("<web service uri here>"));
// This is the custom binding I created per the MS KB article
WseHttpBinding binding = new WseHttpBinding();
binding.SecurityAssertion = WseSecurityAssertion.UsernameOverTransport;
binding.EstablishSecurityContext = false;
// Not sure about the value of either of these next two
binding.RequireDerivedKeys = true;
binding.MessageProtectionOrder = MessageProtectionOrder.SignBeforeEncrypt;
MembershipServiceSoapClient proxy = new MembershipServiceSoapClient(binding, address);
// This is where I believe the problem lies - I can't seem to properly set up
// the security credentials the web service is expecting
proxy.ClientCredentials.UserName.UserName = "username";
proxy.ClientCredentials.UserName.Password = "pwd";
// How do I supply the CN number?
MyObject mo = proxy.MyMethod(); // this throws the exception

I've scoured the web looking for an answer to this question. Some sources get me close (like the MS KB article), but I can't seem to get over the hump. Can someone help me out?

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
Update: This blog post is written based on SafePeak, which is available for free download. Today, I’d like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. SafePeak’s software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I’d like to look more closely at some of these options, as some of these capabilities could help you address lagging database and application performance on your systems. To better understand the available options, it is best to start by understanding the difference between the usual “Basic Caching” and SafePeak’s “Dynamic Caching”.

Basic Caching
Basic Caching (or stale and static caching) is the ability to put the results from a query into cache for a certain period of time. It is based on TTL, or Time-to-Live, and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified due to DML commands (update/insert/delete), the cache will still hold the same obsolete query data. In other words, Basic Caching is really a static, stale cache. As you can tell, this approach has its limitations.

Dynamic Caching
Dynamic Caching (or non-stale caching) is the ability to put the results from a query into cache while keeping the cache transaction-aware, watching for possible data modifications. The modifications can come as a result of: DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, and complex cases of stored procedures with multiple levels of internal stored procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) becomes the actual data update commands. Now that we have a basic understanding of the differences between “basic” and “dynamic” caching, let’s dive in deeper.

SafePeak: A comprehensive and versatile caching platform
SafePeak comes with a wide range of caching options. Some of SafePeak’s caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and data managers to reach excellent performance acceleration and application scalability for a wide range of business cases and applications.

Automated caching of SQL Queries: Fully/semi-automated caching of all “read” SQL queries, containing any types of data, including Blobs, XMLs, and Texts, as well as all other standard data types. SafePeak automatically analyzes the incoming queries, categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures;

Automated caching of Stored Procedures: Fully or semi-automated caching of all “read” stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak’s Metadata-Learning process and their SQL schemas are parsed – resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures), enabling automated or semi-automated (manually review and activate by a mouse-click) cache activation, with full understanding of the transaction logic for real-time cache invalidation;

Transaction aware cache: Automated cache awareness for SQL transactions (SQL and in-procs);

Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response time value even in the most complicated use-cases;

Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as “read and deterministic” are automatically activated for caching;

Semi-Automated Caching: SQL Patterns categorized as “read and non-deterministic” are patterns of SQL queries and stored procedures that contain references to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator, and usually most of them are activated manually for caching (point-and-click activation);

Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of “write” commands (a DML or a stored procedure) to one of the relevant tables. A default setting;

Semi Dynamic Caching: A manual cache configuration option enabling reduced sensitivity of specific SQL Patterns to “write” commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archived order details), or when the application's sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load);

Scheduled Cache Eviction: A manual cache configuration option enabling scheduled SQL Pattern cache eviction at certain time(s) during the day. A very useful optimization technique when (for example) certain SQL Patterns can be cached but are time sensitive. Example: “select customers whose birthday is today” – an SQL query with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight);

Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are flagged as “Dynamic Objects” with the highest transaction safety settings (such as: full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration step in a deployment;

Overriding Settings of Stored Procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, in case a stored procedure SP1 has an “insert” into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a “logging or instrumentation” table left by developers. By overriding the settings, a user can allow caching of the problematic stored procedure;

Advanced Cache Warm-Up: Creating an XML-based list of queries and stored procedures (with lists of parameters) for periodic automated pre-fetching and caching. An advanced tool allowing you to handle rarer but very performance-sensitive queries by pre-fetching them into cache, ensuring high performance for users’ data access;

Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat-map of database objects and SQL Patterns. The performance-driven configuration helps you to focus on the most important settings that bring you the highest performance gains. Use of SafePeak SQL Analytics allows continuous performance monitoring and analysis, and easy identification of bottlenecks in both real-time and historical data;

Cloud Ready: Available for instant deployment on Amazon Web Services (AWS).

As you can see, there are many options to configure SafePeak’s SQL Server database and application acceleration caching technology to best fit a lot of situations. If you’re not familiar with their technology, they offer free-trial software you can download that comes with a free “help session” to help get you started. You can access the free trial here. Also, SafePeak is available to use on Amazon Cloud. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
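To make the basic-versus-dynamic distinction above concrete, here is a small C# sketch - purely illustrative, and in no way SafePeak's actual API - of a TTL cache that also evicts entries the moment a write touches a table they read from:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Illustrative sketch only (not SafePeak's API): contrasts basic TTL
// caching with dynamic, write-aware invalidation.
public class QueryResultCache
{
    private class Entry
    {
        public object Result;
        public DateTime ExpiresUtc;
    }

    private readonly ConcurrentDictionary<string, Entry> _entries =
        new ConcurrentDictionary<string, Entry>();

    // For each table, the cache keys of queries that read from it.
    private readonly ConcurrentDictionary<string, HashSet<string>> _tableReaders =
        new ConcurrentDictionary<string, HashSet<string>>();

    // Basic caching: the entry lives until the TTL lapses, even if the
    // underlying rows have changed - readers may see stale data.
    public void Put(string sql, object result, TimeSpan ttl, IEnumerable<string> tablesRead)
    {
        _entries[sql] = new Entry { Result = result, ExpiresUtc = DateTime.UtcNow + ttl };
        foreach (var table in tablesRead)
        {
            var readers = _tableReaders.GetOrAdd(table, _ => new HashSet<string>());
            lock (readers) readers.Add(sql);
        }
    }

    public bool TryGet(string sql, out object result)
    {
        result = null;
        if (_entries.TryGetValue(sql, out var entry) && entry.ExpiresUtc > DateTime.UtcNow)
        {
            result = entry.Result;
            return true;
        }
        return false;
    }

    // Dynamic caching: when a DML command touches a table, evict every
    // cached query that read from it immediately, instead of waiting
    // for the TTL to run out.
    public void OnWrite(string table)
    {
        if (_tableReaders.TryRemove(table, out var readers))
        {
            lock (readers)
            {
                foreach (var sql in readers) _entries.TryRemove(sql, out _);
            }
        }
    }
}

A caller that only ever uses Put/TryGet gets the basic, possibly-stale behavior; wiring OnWrite into the path of every INSERT/UPDATE/DELETE is what turns it into the dynamic, non-stale variant.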

    Read the article

  • Granular Clipboard Control in Oracle IRM

    - by martin.abrahams
One of the main leak prevention controls that customers are looking for is clipboard control. After all, there is little point in controlling access to a document if authorised users can simply make unprotected copies by use of the cut and paste mechanism. Oddly, for such a fundamental requirement, many solutions only offer very simplistic clipboard control - and require the customer to make an awkward choice between usability and security. In many cases, clipboard control is simply an ON-OFF option. By turning the clipboard OFF, you disable one of the most valuable edit functions known to man. Try working for any length of time without copying and pasting, and you'll soon appreciate how valuable that function is. Worse, some solutions disable the clipboard completely - not just for the protected document but for all of the various applications you have open at the time. Normal service is only resumed when you close the protected document. In this way, policy enforcement bleeds out of the particular assets you need to protect and interferes with the entire user experience. On the other hand, turning the clipboard ON satisfies a fundamental usability requirement - but also makes it really easy for users to create unprotected copies of sensitive information, maliciously or otherwise. All they need to do is paste into another document. If creating unprotected copies is this simple, you have to question how much you are really gaining by applying protection at all. You may not be allowed to edit, forward, or print the protected asset, but all you need to do is create a copy and work with that instead. And that activity would not be tracked in any way. So, a simple ON-OFF control creates a real tension between usability and security. If you are only using IRM on a small scale, perhaps security can outweigh usability - the business can put up with the restriction if it only applies to a handful of important documents. But try extending protection to large numbers of documents and large user communities, and the restriction rapidly becomes really unwelcome. I am aware of one solution that takes a different tack. Rather than disable the clipboard, pasting is always permitted, but protection is automatically applied to any document that you paste into. At first glance, this sounds great - protection travels with the content. However, at any scale this model may not be so appealing once you've had to deal with support calls from users who have accidentally applied protection to documents that really don't need it - which would be all too easily done. This may help control leakage, but it also pollutes the system with documents that have policies applied with no obvious rhyme or reason, and it can seriously inconvenience the business by making non-sensitive documents difficult to access. And if you paste some protected content into an already protected document, which policy applies? There are no prizes for guessing that Oracle IRM takes a rather different approach.

The Oracle IRM Approach

Oracle IRM offers a spectrum of clipboard controls between the extremes of ON and OFF, and it leverages the classification-based rights model to give granular control that satisfies both security and usability needs. Firstly, we take it for granted that if you have EDIT rights, of course you can use the clipboard within a given document. Why would we force you to retype a piece of content that you want to move from HERE... to HERE...?
If the pasted content remains in the same document, it is equally well protected whether it be at the beginning, middle, or end - or all three. So, the first point is that Oracle IRM always enables the clipboard if you have the right to edit the file. Secondly, whether we enable or disable the clipboard, we only affect the protected document. That is, you can continue to use the clipboard in the usual way for unprotected documents and applications regardless of whether the clipboard is enabled or disabled for the protected document(s). And if you have multiple protected documents open, each may have the clipboard enabled or disabled independently, according to whether you have Edit rights for each. So, even for the simplest cases - the ON-OFF cases - Oracle IRM adds value by containing the effect to the protected documents rather than to the whole desktop environment. Now to the granular options between ON and OFF. Thanks to our classification model, we can define rights that enable pasting between documents in the same classification - ie. between documents that are protected by the same policy. So, if you are working on this month's financial report and you want to pull some data from last month's report, you can simply cut and paste between the two documents. The two documents are classified the same way, subject to the same policy, so the content is equally safe in both documents. However, if you try to paste the same data into an unprotected document or a document in a different classification, you can be prevented. Thus, the control balances legitimate user requirements to allow pasting with legitimate information security concerns to keep data protected. We can take this further. You may have the right to paste between related classifications of document. So, the CFO might want to copy some financial data into a board document, where the two documents are sealed to different classifications. The CFO's rights may well allow this, as it is a reasonable thing for a CFO to want to do. But policy might prevent the CFO from copying the same data into a classification that is accessible to external parties. The above option, to copy between classifications, may be for specific classifications or open-ended. That is, your rights might enable you to go from A to B but not to C, or you might be allowed to paste to any classification subject to your EDIT rights. As for so many features of Oracle IRM, our classification-based rights model makes this type of granular control really easy to manage - you simply define that pasting is permitted between classifications A and B, but omit C. Or you might define that pasting is permitted between all classifications, but not to unprotected locations. The classification model enables millions of documents to be controlled by a few such rules. Finally, you MIGHT have the option to paste anywhere - such that unprotected copies may be created. This is rare, but a legitimate configuration for some users, some use cases, and some classifications - but not something that you have to permit simply because the alternative is too restrictive. As always, these rights are defined in user roles - so different users are subject to different clipboard controls as required in different classifications. So, where most solutions offer just two clipboard options - ON-OFF or ON-but-encrypt-everything-you-touch - Oracle IRM offers real granularity that leverages our classification model. 
Indeed, I believe it is the lack of a classification model that makes such granularity impractical for other IRM solutions, because the matrix of rules for controlling pasting would be impossible to manage - there are so many documents to consider, and more are being created all the time.
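As a footnote, the classification-based rules described above reduce to a very small check at paste time. The sketch below is my own illustration in C# - not Oracle IRM's implementation - and the classification names are invented for the example:

using System.Collections.Generic;

// Illustrative sketch (not Oracle IRM's implementation): a role's paste
// rights expressed as a map from a source classification to the set of
// target classifications it may paste into.
public class PasteRights
{
    private readonly Dictionary<string, HashSet<string>> _allowedTargets;

    public PasteRights(Dictionary<string, HashSet<string>> allowedTargets)
    {
        _allowedTargets = allowedTargets;
    }

    // True if this role may paste content from the source classification
    // into the target classification; anything not listed is refused.
    public bool CanPaste(string sourceClassification, string targetClassification)
    {
        return _allowedTargets.TryGetValue(sourceClassification, out var targets)
               && targets.Contains(targetClassification);
    }
}

A CFO role might then carry { "Finance" : { "Finance", "Board" } }: pasting financial data into a board document succeeds, while pasting into an externally accessible classification (or into an unprotected document, which simply never appears as an allowed target) is refused. Because the rules name classifications rather than documents, a handful of entries governs millions of documents - which is exactly the manageability point made above.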

    Read the article

  • Redirect output logs of javax.xml.ws and com.sun.xml.ws

    - by chrisnfoneur
I am working on a SOAP-based web service, with Sun's Metro. I am facing an annoying bug: each time I send a malformed SOAP object to my web service, Sun's API spams System.out with logs like this:

javax.xml.ws.WebServiceException: com.sun.istack.XMLStreamException2: org.xml.sax.SAXParseException: cvc-complex-type.4: Attribute 'type' must appear on element 'object'.
at com.sun.xml.ws.util.pipe.AbstractSchemaValidationTube.doProcess(AbstractSchemaValidationTube.java:206)
at com.sun.xml.ws.util.pipe.AbstractSchemaValidationTube.processRequest(AbstractSchemaValidationTube.java:175)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:595)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:554)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436)
at com.sun.xml.ws.server.WSEndpointImpl$2.process(WSEndpointImpl.java:243)
at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:444)
at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244)
at com.sun.xml.ws.transport.http.server.WSHttpHandler.handleExchange(WSHttpHandler.java:106)
at com.sun.xml.ws.transport.http.server.WSHttpHandler.handle(WSHttpHandler.java:91)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:65)
at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:54)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:68)
at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:555)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:65)
at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:527)
at sun.net.httpserver.ServerImpl$DefaultExecutor.execute(ServerImpl.java:119)
at sun.net.httpserver.ServerImpl$Dispatcher.handle(ServerImpl.java:349)
at sun.net.httpserver.ServerImpl$Dispatcher.run(ServerImpl.java:321)
at java.lang.Thread.run(Thread.java:619)
Caused by: com.sun.istack.XMLStreamException2: org.xml.sax.SAXParseException: cvc-complex-type.4: Attribute 'type' must appear on element 'object'.
at com.sun.xml.ws.util.xml.StAXSource$1.parse(StAXSource.java:185)
at com.sun.xml.ws.util.xml.StAXSource$1.parse(StAXSource.java:170)
at org.apache.xerces.jaxp.validation.ValidatorHandlerImpl.validate(Unknown Source)
at org.apache.xerces.jaxp.validation.ValidatorImpl.validate(Unknown Source)
at javax.xml.validation.Validator.validate(Validator.java:127)
at com.sun.xml.ws.util.pipe.AbstractSchemaValidationTube.doProcess(AbstractSchemaValidationTube.java:204)
... 20 more

I would like to switch off this log or redirect it to my error/warn/debug.log files used by log4j. I tried to add a rule in my log4j.xml file:

<category name="javax.xml.ws">
  <priority value="error" />
</category>

It didn't work. So I tried the following trick:

java.util.logging.Logger.getLogger("javax.xml.ws.WebServiceException").setLevel(java.util.logging.Level.OFF);

It didn't work either. Any ideas? It is not a big issue, but it keeps my catalina.log growing bigger and bigger, and it's not the appropriate place for this kind of log. Chris

    Read the article

  • How to implement a SIMPLE "You typed ACB, did you mean ABC?"

    - by marcgg
I know this is not a straight up question, so if you need me to provide more information about the scope of it, let me know. There are a bunch of questions that address almost the same issue (they are linked here), but never the exact same one with the same kind of scope and objective - at least as far as I know.

Context:
- I have an MP3 file with ID3 tags for artist name and song title.
- I have two tables, Artists and Songs.
- The ID3 tags might be slightly off (e.g. Mikaell Jacksonne).
- I'm using ASP.NET + C# and an MSSQL database.
- I need to synchronize the MP3s with the database.

Meaning:
- The user launches a script.
- The script browses through all the MP3s.
- The script says "Is 'Mikaell Jacksonne' 'Michael Jackson'? YES/NO"
- The user picks and we start over.

Examples of what the system could find. In the database:
SONGS = {"This is a great song title", "This is a song title"}
ARTISTS = {"Michael Jackson"}

Outputs:
"This is a grt song title" did you mean "This is a great song title"?
"This is song title" did you mean "This is a song title"?
"This si a song title" did you mean "This is a song title"?
"This si song a title" did you mean "This is a song title"?
"Jackson, Michael" did you mean "Michael Jackson"?
"JacksonMichael" did you mean "Michael Jackson"?
"Michael Jacksno" did you mean "Michael Jackson"?
etc.

I read some documentation from this /how-do-you-implement-a-did-you-mean and this is not exactly what I need, since I don't want to check an entire dictionary. I also can't really use a web service, since it depends a lot on what I already have in my database. If possible I'd also like to avoid dealing with distances and other complicated things. I could use the google api (or something similar) to do this, meaning that the script will try spell checking and test it with the database, but I feel there could be a better solution since my database might end up being really specific with weird songs and artists, making spell checking useless. I could also try something like what has been explained in this post, using Soundex for C#. Using a regular spell checker won't work because I won't be using words but names and 'titles'. So my question is: is there a relatively simple way of doing this, and if so, what is it? Any kind of help would be appreciated. Thanks!
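For what it's worth, here is a minimal C# sketch of the most common answer to this kind of question: compare the tag against the artist and title values already in the database and suggest the nearest one. The asker hoped to avoid distance measures, but for a bounded list like this a plain Levenshtein comparison stays simple; the threshold is an assumption to tune:

using System;
using System.Collections.Generic;

// Minimal "did you mean" sketch: suggest the closest known database value
// (artist or title) to a possibly misspelled ID3 tag, or null if nothing
// is close enough. The maxDistance threshold is a tunable assumption.
public static class DidYouMean
{
    public static string Suggest(string input, IEnumerable<string> knownValues, int maxDistance)
    {
        string best = null;
        int bestDistance = int.MaxValue;
        foreach (var candidate in knownValues)
        {
            int d = Levenshtein(input.ToLowerInvariant(), candidate.ToLowerInvariant());
            if (d < bestDistance)
            {
                bestDistance = d;
                best = candidate;
            }
        }
        return bestDistance <= maxDistance ? best : null;
    }

    // Classic dynamic-programming edit distance: the number of single
    // character inserts, deletes, and substitutions to turn a into b.
    private static int Levenshtein(string a, string b)
    {
        var cost = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) cost[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) cost[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
                cost[i, j] = Math.Min(
                    Math.Min(cost[i - 1, j] + 1, cost[i, j - 1] + 1),
                    cost[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
        return cost[a.Length, b.Length];
    }
}

Called as DidYouMean.Suggest("Mikaell Jacksonne", artistNames, 6), this should come back with "Michael Jackson", and the script can then ask its YES/NO question. Soundex or token re-ordering (for the "Jackson, Michael" case) could be layered on top if plain edit distance proves too strict.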

    Read the article

  • VBA/SQL recordsets

    - by intruesiive
The project I'm asking about is for sending an email to teachers asking what books they're using for the classes they're teaching next semester, so that the books can be ordered. I have a query that compares the course number of this upcoming semester's classes to the course numbers of historical textbook orders, pulling out only those classes that are being taught this semester. That's where I get lost. I have a table that contains the following:
- Professor
- Course Number
- Year
- Book
- Title

The data looks like this:

professor  year  course number  title
smith      13    1111           Pride and Prejudice
smith      13    1111           The Fountainhead
smith      13    1222           The Alchemist
smith      12    1111           Pride and Prejudice
smith      11    1222           Infinite Jest
smith      10    1333           The Bible
smith      13    1333           The Bible
smith      12    1222           The Alchemist
smith      10    1111           Moby Dick
johnson    12    1222           The Tipping Point
johnson    11    1333           Anna Kerenina
johnson    10    1333           Everything is Illuminated
johnson    12    1222           The Savage Detectives
johnson    11    1333           In Search of Lost Time
johnson    10    1333           Great Expectations
johnson    9     1222           Proust on the Shore

Here's what I need the code to do "on paper":
1. Group the records by professor.
2. Determine every unique course number in that group, and group records by course number.
3. For each unique course number, determine the highest year associated.
4. Then spit out every record with that professor + course number + year combination.

With the sample data, the results would be:

professor  year  course number  title
smith      13    1111           Pride and Prejudice
smith      13    1111           The Fountainhead
smith      13    1222           The Alchemist
smith      13    1333           The Bible
johnson    12    1222           The Tipping Point
johnson    11    1333           Anna Kerenina
johnson    12    1222           The Savage Detectives
johnson    11    1333           In Search of Lost Time

I'm thinking I should make a record set for each teacher, and within that, another record set for each course number. Within the course number record set, I need the system to determine what the highest year number is - maybe store that in a variable? Then pull out every associated record so that if the teacher ordered 3 books the last time they taught that class (whether it was in 2013 or 2012 and so on) all three books display. I'm not sure I'm thinking of record sets in the right way, though. My SQL so far is basic and clearly doesn't work:

SELECT [All].Professor, [All].Course, Max([All].Year)
FROM [All]
GROUP BY [All].Professor, [All].Course;

Thanks for your help.
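For the record, the usual shape of the answer to this "latest year per professor and course" problem is a grouped subquery joined back to the table. Below is a sketch in Access-style SQL, assuming the table and column names from the attempted query above ([All] with Professor, Course, Year) plus Title from the sample data; Access can be fussy about derived tables, in which case the inner SELECT can be saved as a separate query and joined instead:

SELECT a.Professor, a.Course, a.[Year], a.Title
FROM [All] AS a
INNER JOIN (
    SELECT Professor, Course, MAX([Year]) AS MaxYear
    FROM [All]
    GROUP BY Professor, Course
) AS latest
ON (a.Professor = latest.Professor
    AND a.Course = latest.Course
    AND a.[Year] = latest.MaxYear);

This returns every book row whose year equals the highest year for its professor/course pair, which matches the expected results listed above - no per-teacher recordsets or year variables needed in VBA.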

    Read the article

< Previous Page | 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807  | Next Page >