Search Results

Search found 25974 results on 1039 pages for 'source routing'.


  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly. You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build so you can fix and redeploy the database very quickly. The alternative approach (i.e. doing changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views.
    Using SQL Projects in SSMS
    A great many developers fail to make use of SQL projects in SSMS (SQL Server Management Studio). To me they are an invaluable way of organizing your SQL scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project name. The number in the folder or file name ensures that the projects and SQL scripts are concatenated together in the order in which they need to be executed. Hence the script filenames start with 100, 110 etc.
    Concatenating SQL Scripts
    To concatenate the SQL scripts together into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL script files into a combined file. As the SQL scripts are in several folders, I simply use the TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write the output of the program into a file. Two angled brackets (>>) means append the output of the program to a file. So the command line DIR > filelist.txt would write the output of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql. If, like me, you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only", which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag.
    Using SQLCmd to execute the concatenated file
    Now that the SQL scripts are all in one big file, we can execute the script against a database using SQLCmd in another batch file, as shown below. SQLCmd has numerous options, but the script shown above simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log.
    So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run SQLCmd to rebuild the database. This two-click operation allows you to quickly identify and fix errors in your entire database definition.
    Using SSIS to execute the concatenated file
    To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package, set the database connection as normal and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go.
    Tips and Tricks
    Add a new-line at the end of every file. The most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix for this is to ensure all your files have a new-line at the end.
    Remove all USE database statements. SQLCmd identifies which database the script should be run against, so you should remove all USE database commands from your scripts – otherwise you may get unintentional side effects!
    Do the CREATE DATABASE separately. If you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package before calling the package that executes the concatenated SQL script.
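    As a rough sketch of the two batch files described above (the folder names, script name and database name are placeholders based on the examples in the text, not the author's actual project layout), the concatenation and deployment steps might look something like this:

    REM build_SB_DDS.cmd - concatenate the project scripts into one deployable file
    ATTRIB -R SB_DDS.sql
    TYPE 100_Tables\*.sql > SB_DDS.sql
    TYPE 110_Views\*.sql >> SB_DDS.sql
    TYPE 120_StoredProcedures\*.sql >> SB_DDS.sql

    REM deploy_SB_DDS.cmd - run the combined script against the local server and capture the output
    SQLCMD -S localhost -d SB_DDS_DB -E -i SB_DDS.sql -o SB_DDS.log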

    Read the article

  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
    I just returned from Doha, Qatar, where the first-of-its-kind HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC, EMEA, as well as smaller regional HEUGs in the Netherlands, South Africa, and in regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. To be known as the Arab HEUG going forward, I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities. Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, as well as Qatar University's use of E-Business Suite for grants management (both pre- and post-award). The most notable fact coming from this latter presentation was the fit (89%) of E-Business Suite Grants to the university's requirements. In a few weeks' time we will be convening the 5th meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (the 5th since my advent into my current role). The main topics of discussion will be our Higher Education Applications Strategy for the future, including cloud approaches to ERP (HCM, Finance, and Student Information Systems), and some case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in Research and also budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion. I recently read a couple of articles that were surprising to me. The first was from Inside Higher Ed on October 15, entitled “As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors.” It continues to disappoint me that after all this time we are still debating whether it is better to build enterprise software through open or community source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. Over a decade or more ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let's not kid ourselves – the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community source system.
    In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it's bordering on irresponsible to continue to pursue open-source ERP. Many adopters' total costs are staggering, and they have little to show for their efforts and expended resources. The second article was recently in the Chronicle of Higher Education and was entitled “'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders.” This one was so outrageous I almost don't want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama's former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he's a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah… right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I'll grant Mr. Reed that “Big Data” is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to produce far more data scientists and analysts than are currently in the pipeline, to further this discipline and apply these tools to many, many other problems across multiple industries.

    Read the article

  • Skype support and dead parrot sketches

    - by Greg Low
    We as an industry have a lot to answer for. One thing is the level of support provided for users. Here's the conversation I'm having with Skype right now. It feels like being in a Monty Python skit. I was really hoping that when Microsoft purchased Skype that things might improve. 8:33:44 PM greglowInitial Question/Comment: Subscriptions (Unlimited Country, Unlimited Europe, Unlimited Region, Unlimited World)8:33:44 PM SystemThank you for contacting Skype Customer Support!8:33:49 PM SystemDonna Faye has joined this session!8:33:50 PM SystemConnected with Donna Faye8:33:50 PM SystemThank you for contacting Skype Customer Support!8:33:54 PM SystemPlease hold for the next available Live Support Agent.8:33:59 PM Donna FayeHello! Welcome to Skype Live Support! My name is Donna L. How may I help you?8:34:09 PM greglowI've got a subscription (skypename greglow)8:34:21 PM greglowIt's set to auto-renew8:34:45 PM greglowWhy have I been getting messages about my Skype number expiring, if it's set to renew automatically8:34:54 PM greglowand has recently renewed automatically8:35:09 PM Donna FayeThank you for the information.8:35:29 PM Donna FayeTo better assist you may I know your Skype name, please?8:35:37 PM greglowgreglow8:35:44 PM Donna FayeThank you.8:36:14 PM Donna FayeI understand your concern that you receive an email notification stating that your calling subscription will about to expire. Am I right?8:36:43 PM greglowNo, I got an email telling me that one of my Skype numbers was going to expire8:37:08 PM greglowWhen I went to the website, it said it was part of the subscription and the subscription was set to auto-renew8:37:18 PM greglowThe subscription auto-renewed on Oct XXth8:37:24 PM greglowBut the number still expired8:37:43 PM greglowWhen I logged on today, it said that the number had expired and that I had 52 days left to reactivate it8:37:58 PM greglowI did that but am trying to understand why that happens at all8:38:36 PM greglowWhy do you expire numbers that are part of auto-renewing subscriptions?8:39:49 PM Donna FayeUpon checking your account, your calling subscription link to three Online Numbers.8:39:55 PM greglowcorrect8:40:10 PM greglowbut until an hour or so ago, one of them had expired. why?8:40:54 PM Donna FayeLet me explain to you that now Online Number are detach to a calling subscription unlike before it was attach to the calling subscription so therefore each Online Number will recur individually.8:41:29 PM Donna FayeUpon checking your account it shows that all your Online Number will expire on October XX, 20138:41:44 PM Donna FayeIf you don't want to be charged again for you can cancel it anytime.8:41:59 PM greglowI have set it to charge automatically8:42:06 PM greglowso why do the numbers expire?8:42:32 PM Donna FayeNo they are not expired they are all active.8:42:33 PM greglowif there is an automatic payment set up, why does anything expire?8:42:52 PM greglowyes, but one of them was expired, and I only fixed it a few hours ago8:43:07 PM greglowI'm trying to understand why it expired8:44:54 PM Donna FayeThe email you got just notify you that you should have good enough funding source of your calling subscription and Online Number in order for this to recur.8:45:12 PM greglowAgreed. I did that but the number still expired. Why?8:45:38 PM Donna FayeHowever do not worry on this hence all Online Number are active and not expired.8:45:41 PM greglowWhen I logged on today it said that the number had expired and that I had 52 days left to reactivate it. 
Why did that happen?8:46:14 PM Donna FayeMay I know the Online Number you are pertaining, please?The number that had expired was (07) XXXX XXXX (Australia)8:49:19 PM Donna FayeOkay, do not worry on this Online Number because this will expire on October XX, 2013.8:49:38 PM Donna FayeThank you for all the information.8:49:46 PM greglowI understand that it won't expire now until next year. I'm trying to understand why it stopped working this time8:49:51 PM greglowso I can avoid that next time8:50:39 PM Donna FayeWhat do yo stop working, you mean that Online Number is unable to be reach out?8:50:59 PM Donna Faye*what do you mean by stop working8:51:18 PM greglowThe Skype number +61 7 XXXX XXXX stopped working because it expired8:51:32 PM Donna FayeNo, this into expired.8:51:34 PM greglowI'm trying to understand why it expired8:51:50 PM greglowSorry I don't follow "this into expired"8:52:39 PM Donna FayeLet me inform you that this Online Number is not expired.8:52:54 PM Donna FayePlease disregard the notification you see on your Skype account.8:53:02 PM greglowSorry, this is getting very frustrating. I already told you I fixed it today. I want to know why it happened.8:53:20 PM Donna FayeHence I can rest assured you that this is active until October XX, 2013.8:53:21 PM greglowSo I can avoid it happening again, on this account, or on our other accounts8:53:34 PM greglowCan I speak to someone who really understands this please?Donna FayeSkype send a notification to Skype users for their Skype product in order for them to be aware the expiration of your product however your Online Number still active on your account hence when your calling subscription Unlimited World recur your Online Number also recur. 8:57:59 PM Donna Faye I do apologize for this matter, please allow me to resolve this issue for you. 9:02:48 PM greglow My question is still the same: if it's set to auto-pay, why did it expire? 9:03:50 PM greglow Why did it stop working? Why did I have to "reactivate" it? 9:04:08 PM greglow Why didn't it just keep working if it's set to auto-pay? 9:04:30 PM greglow This isn't a difficult concept 9:04:34 PM Donna Faye Let me clarify to you that this is not the really expired what is emphasizing on this is when you don't have good enough funding source when you calling subscription recur the Online Number will possibly expired but seem that you have a good funding source then this will continue until 2013. 9:05:10 PM greglow When I logged on today, it was not working, and your website said it had expired and that I had 52 days left to reactivate it 9:05:17 PM greglow Why?and so on, and so on, and so on.... 

    Read the article

  • First Foray&ndash;About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted.  I have a list of topics that I will be working through and posting.  Some I am sure will have been posted by others, but I will be sticking to the technical problems and challenges that I’ve recently faced, and the solutions that worked for me.  My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.   A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis Missouri, Covenant Technology Partners, LLC and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions.  I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.   Enough about me… On to a real problem… SSIS Connection Time outs versus Command Time outs Last week, I was working on automating the processing for a large Analysis Services cube.  I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube.  I had the package working great, tested, and ready for deployment.  It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions that are present that are no longer needed in a rolling 60 month window. My client uses Tivoli for running all their production jobs, and not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass in server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.  On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager  timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number more failed attempts, and getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds.  This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.  At this point one of the DBA’s found a post on the Teradata forum that had the clues to the puzzle: There is a separate “CommandTimeout” custom property on the Data source object that may needed to be adjusted for longer running queries.  I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. from the context menu, I selected “Show Advanced Editor” and found the property. Sure enough, it was set to 30 seconds. 
The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds.  I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn’t. The Teradata DBA’s adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute.  Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high-demand on the production server. With this change in place, the job finally completed successfully.  The lesson learned for me was two-fold: Always compare query execution times between development and production environments, and don’t assume that production will always be faster.  With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly. SSIS Connection time out settings do not affect command time outs.  Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command time outs control how long a task will wait for results to start being returned before deciding that the server is not responding. Both lessons seem pretty straight forward, and I felt pretty sheepish once I finally figured out what the issue was.  To be fair though, In the 5+ years that I have been working with SSIS, I could only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
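    For reference, a command file of the sort described above (for Tivoli or any other external scheduler) typically just wraps DTExec; the package path, configuration file and property path below are illustrative placeholders rather than the ones used in this project:

    REM Run the package with server-specific values supplied by an XML configuration file
    DTEXEC /FILE "D:\SSIS\ProcessCubePartitions.dtsx" /CONFIGFILE "D:\SSIS\ProcessCubePartitions.UAT.dtsConfig" /REPORTING E

    REM Alternatively, override individual values directly with /SET (property path shown is only an example)
    DTEXEC /FILE "D:\SSIS\ProcessCubePartitions.dtsx" /SET \Package.Variables[User::SourceServer].Value;UATSERVER01 /REPORTING E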

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment.  Below is some of the information and details regarding the form I created for the session.  There are many blogs and Microsoft articles on how to create a basic form so I won’t repeat that information here.   This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach.  The above link contains the zipped package files of the two InfoPath forms(no code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck.  If you plan to use these templates, you will need to update the forms to work within your own environments (change data connections, code links, etc.).  Also, you must have the SharePoint Enterprise version, with InfoPath Services configured in order to use the Web Browser enabled forms. So what are the requirements for this template? Business Card Request Form Template Design Plan: Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Show and hide fields as necessary for requirements Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. Browser based form integrated into SharePoint team site Submitted directly to form library The base form was created using the blank template.  The table and rows were added using Insert tab and selecting Custom Table.  The use of tables is a great way to make sure everything lines up.  You do have to split the tables from time to time.  If you’ve ever split cells and then tried to re-align one to find that you impacted the others, you know why.  Here is what the base form looks like in InfoPath.   Show and hide fields as necessary for requirements You will notice I also used Sections within the form.  These show or hide depending on options selected or whether or not fields are blank.  This is a great way to prevent your users from feeling overwhelmed with a large form (this one wouldn’t apply).  Although not used in this one, you can also use various views with a tab interface.  I’ll show that in another post. Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Utilizing rules you can load data when the form initiates (Data tab, Form Load).  Anything you can automate is always appreciated by the user as that is data they don’t have to enter.  For example, loading their user id or other user information on load: Always keep in mind though how much data you load and the method for loading that data (through rules, code, etc.).  They have an impact on form performance.  The form will take longer to load if you bring in a ton of data from external sources.  Laura Rogers has a great blog post on using the User Information List to load user information.   If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit.   What I have found is that using the User Profile service via code behind or the Web Service “GetUserProfileByName” (as above) can take more time to load the user data.  Just food for thought. You must add the data connection in order for the above rules to work.  
You can connect to the data connection through the Data tab, Data Connections or select Manage Data Connections link which appears under the main data source.  The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.  Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. You can also create multiple views for the users to enhance their experience.  Once they’ve entered the information and submitted their request for business cards, they don’t really need to see the main data input screen any more.  They just need to view what they entered. From the Page Design tab, select New View and give the view a name.  To review the existing views, click the down arrow under View: The ReviewView shows just what the user needs and nothing more: Once you have everything configured, the form should be tested within a Test SharePoint environment before final deployment to production.  This validates you don’t have any rules or code that could impact the server negatively. Submitted directly to form library   You will need to know the form library that you will be submitting to when publishing the template.  Configure the Submit data connection to connect to this library.  There is already one configured in the sample,  but it will need to be updated to your environment prior to publishing. The Design template is different from the Published template.  While both have the .XSN extension, the published template contains all the “package” information for the form.  The published form is what is loaded into Central Admin, not the design template. Browser based form integrated into SharePoint team site In Central Admin, under General Settings, select Manage Form Templates.  Upload the published form template and Activate it to a site collection. Now it is available as a content type to select in the form library.  Some documentation on publishing form templates:  Technet – Manage administrator approved form templates And that’s all our base requirements.  Hope this helps to give a good start.
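    The coded version of the sample isn't reproduced here, but as a rough sketch of what its form-load logic is broadly shaped like in a C# managed-code form template (the field name and XPath are hypothetical and must match your own main data source; the other half of the partial class is generated by the InfoPath designer):

    using Microsoft.Office.InfoPath;
    using System;
    using System.Xml.XPath;

    public partial class FormCode
    {
        public void InternalStartup()
        {
            // Wire up the form's Loading event (normally generated by the designer).
            EventManager.FormEvents.Loading += new LoadingEventHandler(FormEvents_Loading);
        }

        public void FormEvents_Loading(object sender, LoadingEventArgs e)
        {
            // Pre-populate a (hypothetical) UserName field with the current account,
            // so the requester doesn't have to type it.
            XPathNavigator root = this.MainDataSource.CreateNavigator();
            XPathNavigator userField = root.SelectSingleNode(
                "/my:myFields/my:UserName", this.NamespaceManager);

            if (userField != null && string.IsNullOrEmpty(userField.Value))
            {
                userField.SetValue(Environment.UserName);
            }
        }
    }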

    Read the article

  • LIBGDX "parsing error emitter" with 2 or more emitters [on hold]

    - by flow969
    I have a problem with the use of particle effect of LIBGDX with 2 or more emitters. After using ParticleEditor to create my .p file, I use it in my code BUT...when I use only 1 emitter it's fine but with more than 1, not fine ! :( Here is my error code in java console : Exception in thread "LWJGL Application" java.lang.RuntimeException: Error parsing emitter: - Delay - at com.badlogic.gdx.graphics.g2d.ParticleEmitter.load(ParticleEmitter.java:910) at com.badlogic.gdx.graphics.g2d.ParticleEmitter.<init>(ParticleEmitter.java:95) at com.badlogic.gdx.graphics.g2d.ParticleEffect.loadEmitters(ParticleEffect.java:154) at com.badlogic.gdx.graphics.g2d.ParticleEffect.load(ParticleEffect.java:138) at com.fasgame.fishtrip.android.screens.GameScreen.show(GameScreen.java:313) at com.badlogic.gdx.Game.setScreen(Game.java:61) at com.fasgame.fishtrip.android.screens.MainMenuScreen.render(MainMenuScreen.java:71) at com.badlogic.gdx.Game.render(Game.java:46) at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:206) at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:114) Caused by: java.lang.NumberFormatException: For input string: "- Count -" at sun.misc.FloatingDecimal.readJavaFormatString(Unknown Source) at sun.misc.FloatingDecimal.parseFloat(Unknown Source) at java.lang.Float.parseFloat(Unknown Source) at com.badlogic.gdx.graphics.g2d.ParticleEmitter.readFloat(ParticleEmitter.java:929) at com.badlogic.gdx.graphics.g2d.ParticleEmitter$RangedNumericValue.load(ParticleEmitter.java:1062) at com.badlogic.gdx.graphics.g2d.ParticleEmitter.load(ParticleEmitter.java:866) ... 9 more And here is my particle effect .p file : Blanc - Delay - active: false - Duration - lowMin: 3000.0 lowMax: 3000.0 - Count - min: 0 max: 200 - Emission - lowMin: 0.0 lowMax: 0.0 highMin: 250.0 highMax: 250.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Life - lowMin: 500.0 lowMax: 500.0 highMin: 500.0 highMax: 500.0 relative: false scalingCount: 3 scaling0: 1.0 scaling1: 0.47058824 scaling2: 0.0 timelineCount: 3 timeline0: 0.0 timeline1: 0.51369864 timeline2: 1.0 - Life Offset - active: false - X Offset - active: false - Y Offset - active: false - Spawn Shape - shape: point - Spawn Width - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Spawn Height - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Scale - lowMin: 0.0 lowMax: 0.0 highMin: 70.0 highMax: 70.0 relative: true scalingCount: 2 scaling0: 1.0 scaling1: 0.0 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Velocity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 30.0 highMax: 300.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Angle - active: true lowMin: 220.0 lowMax: 320.0 highMin: 220.0 highMax: 320.0 relative: false scalingCount: 2 scaling0: 0.0 scaling1: 0.98039216 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Rotation - active: false - Wind - active: false - Gravity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Tint - colorsCount: 3 colors0: 0.50980395 colors1: 0.7647059 colors2: 0.7921569 timelineCount: 1 timeline0: 0.0 - Transparency - lowMin: 0.0 lowMax: 0.0 highMin: 1.0 highMax: 1.0 relative: false scalingCount: 4 scaling0: 1.0 scaling1: 1.0 scaling2: 1.0 scaling3: 1.0 
timelineCount: 4 timeline0: 0.0 timeline1: 0.36301368 timeline2: 0.6164383 timeline3: 1.0 - Options - attached: false continuous: true aligned: false additive: true behind: false premultipliedAlpha: false pre_particle.png Bleu - Delay - active: false - Duration - lowMin: 3000.0 lowMax: 3000.0 - Count - min: 0 max: 200 - Emission - lowMin: 0.0 lowMax: 0.0 highMin: 250.0 highMax: 250.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Life - lowMin: 500.0 lowMax: 500.0 highMin: 500.0 highMax: 500.0 relative: false scalingCount: 3 scaling0: 1.0 scaling1: 0.47058824 scaling2: 0.0 timelineCount: 3 timeline0: 0.0 timeline1: 0.51369864 timeline2: 1.0 - Life Offset - active: false - X Offset - active: false - Y Offset - active: false - Spawn Shape - shape: point - Spawn Width - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Spawn Height - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Scale - lowMin: 0.0 lowMax: 0.0 highMin: 70.0 highMax: 70.0 relative: true scalingCount: 2 scaling0: 1.0 scaling1: 0.0 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Velocity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 30.0 highMax: 300.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Angle - active: true lowMin: 220.0 lowMax: 320.0 highMin: 220.0 highMax: 320.0 relative: false scalingCount: 2 scaling0: 0.0 scaling1: 0.98039216 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Rotation - active: false - Wind - active: false - Gravity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Tint - colorsCount: 3 colors0: 0.0 colors1: 0.7254902 colors2: 0.7921569 timelineCount: 1 timeline0: 0.0 - Transparency - lowMin: 0.0 lowMax: 0.0 highMin: 1.0 highMax: 1.0 relative: false scalingCount: 6 scaling0: 0.0 scaling1: 1.0 scaling2: 1.0 scaling3: 1.0 scaling4: 1.0 scaling5: 0.0 timelineCount: 6 timeline0: 0.0 timeline1: 0.047945205 timeline2: 0.34246576 timeline3: 0.6712329 timeline4: 0.94520545 timeline5: 1.0 - Options - attached: false continuous: true aligned: false additive: true behind: false premultipliedAlpha: false pre_particle.png BleuFonce - Delay - active: false - Duration - lowMin: 3000.0 lowMax: 3000.0 - Count - min: 0 max: 200 - Emission - lowMin: 0.0 lowMax: 0.0 highMin: 250.0 highMax: 250.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Life - lowMin: 500.0 lowMax: 500.0 highMin: 500.0 highMax: 500.0 relative: false scalingCount: 3 scaling0: 1.0 scaling1: 0.47058824 scaling2: 0.0 timelineCount: 3 timeline0: 0.0 timeline1: 0.51369864 timeline2: 1.0 - Life Offset - active: false - X Offset - active: false - Y Offset - active: false - Spawn Shape - shape: point - Spawn Width - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Spawn Height - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Scale - lowMin: 0.0 lowMax: 0.0 highMin: 70.0 highMax: 70.0 relative: true scalingCount: 2 scaling0: 1.0 scaling1: 0.0 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Velocity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 30.0 highMax: 300.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 
0.0 - Angle - active: true lowMin: 220.0 lowMax: 320.0 highMin: 220.0 highMax: 320.0 relative: false scalingCount: 2 scaling0: 0.0 scaling1: 0.98039216 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Rotation - active: false - Wind - active: false - Gravity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Tint - colorsCount: 3 colors0: 0.0 colors1: 0.7294118 colors2: 1.0 timelineCount: 1 timeline0: 0.0 - Transparency - lowMin: 0.0 lowMax: 0.0 highMin: 1.0 highMax: 1.0 relative: false scalingCount: 4 scaling0: 1.0 scaling1: 0.0 scaling2: 0.0 scaling3: 1.0 timelineCount: 4 timeline0: 0.0 timeline1: 0.001 timeline2: 0.5753425 timeline3: 0.79452056 - Options - attached: false continuous: true aligned: false additive: true behind: false premultipliedAlpha: false pre_particle.png For the "- Image Path -" missing it's normal if I let them in it doesn't work even with only 1 emitter PS : I've already updated my lib to the last release
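    For context, a .p file like the one above is usually loaded with ParticleEffect.load, whose parser is what throws the error shown; the file and directory names in this sketch are placeholders rather than the asker's project paths:

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.g2d.ParticleEffect;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;

    // Minimal sketch of loading and drawing a multi-emitter effect.
    public class EffectSketch extends ApplicationAdapter {
        private SpriteBatch batch;
        private ParticleEffect effect;

        @Override
        public void create() {
            batch = new SpriteBatch();
            effect = new ParticleEffect();
            // The second argument is the directory holding the images named at the end of
            // each emitter block (e.g. pre_particle.png above).
            effect.load(Gdx.files.internal("particles/effect.p"), Gdx.files.internal("particles"));
            effect.setPosition(200, 200);
            effect.start();
        }

        @Override
        public void render() {
            batch.begin();
            effect.draw(batch, Gdx.graphics.getDeltaTime());
            batch.end();
        }
    }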

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.   If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty). That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an inserted, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field, that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you’re needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit. 
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
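    As a sketch of the pattern being described here – the table and column names are made up for illustration, not the demo tables referenced above – an UPSERT-style MERGE that audits $action, the before/after values and a source-only column, plus the ON 1=2 insert-only variant, could look like this:

    -- Illustrative names only.
    DECLARE @Audit TABLE
    (
        AuditAction nvarchar(10),
        CustomerID  int,
        OldName     nvarchar(100),
        NewName     nvarchar(100),
        Src         nvarchar(20)    -- comes from the source query, not the target table
    );

    MERGE dbo.Customers AS t
    USING (SELECT CustomerID, CustomerName, Src FROM dbo.StagedCustomers) AS s
        ON t.CustomerID = s.CustomerID
    WHEN MATCHED THEN
        UPDATE SET t.CustomerName = s.CustomerName
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, CustomerName) VALUES (s.CustomerID, s.CustomerName)
    OUTPUT $action, s.CustomerID, deleted.CustomerName, inserted.CustomerName, s.Src
    INTO @Audit (AuditAction, CustomerID, OldName, NewName, Src);

    -- The INSERT-only variant: ON 1 = 2 guarantees nothing matches, so every source row
    -- falls through to the INSERT branch, while OUTPUT can still see the source columns.
    MERGE dbo.Customers AS t
    USING (SELECT CustomerID, CustomerName, Src FROM dbo.StagedCustomers) AS s
        ON 1 = 2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, CustomerName) VALUES (s.CustomerID, s.CustomerName)
    OUTPUT $action, inserted.CustomerID, s.Src
    INTO @Audit (AuditAction, CustomerID, Src);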

    Read the article

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
    The wait is over—we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2, we have heard your feedback and suggestions, and based on that we have come up with a whole new set of features. Here are some of the highlights:
    A New Programming Model – A clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server.
    Tight integration with Reactive Framework (Rx) – You can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects.
    High Availability – Check-pointing over temporal streams and multiple processes with shared computation.
    Here is how simple coding can be with the 2.1 Programming Model:

    class Program
    {
        static void Main(string[] args)
        {
            using (Server server = Server.Create("Default"))
            {
                // Create an app
                Application app = server.CreateApplication("app");

                // Define a simple observable which generates an integer every second
                var source = app.DefineObservable(() =>
                    Observable.Interval(TimeSpan.FromSeconds(1)));

                // Define a sink.
                var sink = app.DefineObserver(() =>
                    Observer.Create<long>(x => Console.WriteLine(x)));

                // Define a query to filter the events
                var query = from e in source
                            where e % 2 == 0
                            select e;

                // Bind the query to the sink and create a runnable process
                using (IDisposable proc = query.Bind(sink).Run("MyProcess"))
                {
                    Console.WriteLine("Press a key to dispose the process...");
                    Console.ReadKey();
                }
            }
        }
    }

    That's how easily you can define a source and sink, compose a query, and run it. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned, you will see a series of articles coming out over the next few weeks about the new features and how to use them. Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate if you could provide feedback to the docs as well—best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface. Regards, The StreamInsight Team

    Read the article

  • Using Subjects to Deploy Queries Dynamically

    - by Roman Schindlauer
    In the previous blog posting, we showed how to construct and deploy query fragments to a StreamInsight server, and how to re-use them later. In today’s posting we’ll integrate this pattern into a method of dynamically composing a new query with an existing one. The construct that enables this scenario in StreamInsight V2.1 is a Subject. A Subject lets me create a junction element in an existing query that I can tap into while the query is running. To set this up as an end-to-end example, let’s first define a stream simulator as our data source: var generator = myApp.DefineObservable(     (TimeSpan t) => Observable.Interval(t).Select(_ => new SourcePayload())); This ‘generator’ produces a new instance of SourcePayload with a period of t (system time) as an IObservable. SourcePayload happens to have a property of type double as its payload data. Let’s also define a sink for our example—an IObserver of double values that writes to the console: var console = myApp.DefineObserver(     (string label) => Observer.Create<double>(e => Console.WriteLine("{0}: {1}", label, e)))     .Deploy("ConsoleSink"); The observer takes a string as parameter which is used as a label on the console, so that we can distinguish the output of different sink instances. Note that we also deploy this observer, so that we can retrieve it later from the server from a different process. Remember how we defined the aggregation as an IQStreamable function in the previous article? We will use that as well: var avg = myApp     .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>         from win in s.TumblingWindow(w)         select win.Avg(e => e.Value))     .Deploy("AverageQuery"); Then we define the Subject, which acts as an observable sequence as well as an observer. Thus, we can feed a single source into the Subject and have multiple consumers—that can come and go at runtime—on the other side: var subject = myApp.CreateSubject("Subject", () => new Subject<SourcePayload>()); Subject are always deployed automatically. Their name is used to retrieve them from a (potentially) different process (see below). Note that the Subject as we defined it here doesn’t know anything about temporal streams. It is merely a sequence of SourcePayloads, without any notion of StreamInsight point events or CTIs. So in order to compose a temporal query on top of the Subject, we need to 'promote' the sequence of SourcePayloads into an IQStreamable of point events, including CTIs: var stream = subject.ToPointStreamable(     e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),     AdvanceTimeSettings.StrictlyIncreasingStartTime); In a later posting we will show how to use Subjects that have more awareness of time and can be used as a junction between QStreamables instead of IQbservables. Having turned the Subject into a temporal stream, we can now define the aggregate on this stream. We will use the IQStreamable entity avg that we defined above: var longAverages = avg(stream, TimeSpan.FromSeconds(5)); In order to run the query, we need to bind it to a sink, and bind the subject to the source: var standardQuery = longAverages     .Bind(console("5sec average"))     .With(generator(TimeSpan.FromMilliseconds(300)).Bind(subject)); Lastly, we start the process: standardQuery.Run("StandardProcess"); Now we have a simple query running end-to-end, producing results. 
    What follows next is the crucial part of tapping into the Subject and adding another query that runs in parallel, using the same query definition (the “AverageQuery”) but with a different window length. We are assuming that we connected to the same StreamInsight server from a different process or even client, and thus have to retrieve the previously deployed entities through their names:

    // simulate the addition of a 'fast' query from a separate server connection,
    // by retrieving the aggregation query fragment
    // (instead of simply using the 'avg' object)
    var averageQuery = myApp
        .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");

    // retrieve the input sequence as a subject
    var inputSequence = myApp
        .GetSubject<SourcePayload, SourcePayload>("Subject");

    // retrieve the registered sink
    var sink = myApp.GetObserver<string, double>("ConsoleSink");

    // turn the sequence into a temporal stream
    var stream2 = inputSequence.ToPointStreamable(
        e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
        AdvanceTimeSettings.StrictlyIncreasingStartTime);

    // apply the query, now with a different window length
    var shortAverages = averageQuery(stream2, TimeSpan.FromSeconds(1));

    // bind new sink to query and run it
    var fastQuery = shortAverages
        .Bind(sink("1sec average"))
        .Run("FastProcess");

    The attached solution demonstrates the sample end-to-end. Regards, The StreamInsight Team

    Read the article

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than internet was 12 years ago. Explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems. Almost 60% want less than 2 second mobile app startup. App with poor mobile experience results in not buying stuff, going to competitor, not liking your company. Battery life. Bad mobile app is worse than no app at all because it turns people away from brand, etc. Apps didn't exist 10 years ago, 72 billion dollars a year in 2013, 151 billion in 2017.Testing performance. Mobile is different than regular app. Need to fix issues before customers discover them. ARO is free and open source AT&T tool for identifying mobile app performance problems. Mobile data is different -- radio resource control state machine. Radio resource control -- radio from idle to continuous reception -- drains battery, sends data, packets coming through, after packets come through radio is still on which is tail time, after 10 seconds of no data coming through radio goes off. For example, YouTube, e.g., 10 to 15 seconds after every connection, can be huge drain on battery, app traffic triggers RRC state. Goal. Balance fast network connectivity against battery usage. ARO is free and open source and test any platform and won awards. How do I test my app? pcap or tcdump network. Native collector: Android and iOS. Android rooted device is needed. Test app on phone, background data, idle for ads and analytics. Graded against 25 best practices. See all the processes, all network traffic mapped to processes, stats about trace, can look just at your app, exlude Facebook, etc. Many tests conducted, e.g., file download, HTML (wrapped applications, e.g., cordova). Best Practices. Make stuff smaller. GZIP, smaller files, download faster, best for files larger than 800 bytes, minification -- remove tabs and commenting -- browser doesn't need that, just give processor what it needs remove wheat from chaff. Images -- make images smaller, 1024x1024 image for a checkmark, swish it, make it 33% smaller, ARO records the screen, probably could be 9 times smaller. Download less stuff. 17% of HTTP content on mobile is duplicate data because of caching, reloading from cache is 75% to 99% faster than downloading again, 75% possible savings which means app will start up faster because using cache -- everyone wants app starting up 2 seconds. Make fewer HTTP requests. Inline and combine CSS and JS when possible reduces the number of requests, spread images used often. Fewer connections. Faster and use less battery, for example, download an image every 60 secs, download an add every 60 seconds, send analytics every 60 seconds -- instead of that, use transaction manager, download everything at once, reduce amount of time connected to network by 40% also -- 80% of applications do NOT close connections when they are finished, e.g., download picture, 10 seconds later the radio turns off, if you do not explicitly close, eventually server closes, 38% more tail time, 40% less energy if you close connection right away, background data traffic is 27% of data and 55% of network time, this kills the battery. Look at redirection. Adds 200 to 600 ms on each connection, waterfall diagram to all the requests -- e.g., xyz.com redirect to www.xyz.com redirect to xyz.mobi to www.xyz.com, waterfall visualization of packets, minimize redirects but redirects are fine. HTML best practices. 
Order matters and hiding code: JS downloading blocks rendering, so always put CSS before JS or load JS asynchronously; CSS 'display:none' hides images from the user, but the browser still downloads them, which adds latency to the application. Some apps turn on GPS for no reason. Tell the network when down, but maybe some other app is using the radio at the same time. It's all about knowing best practices: everyone wins with ARO (carriers, e.g., AT&T, developers, customers). Faster apps, better battery usage, better network traffic, better app reviews, happier customers. MBTA app referenced as an example. ARO is free, open source, and can test all platforms.
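    As a rough Android-flavoured sketch of two of the practices above (reuse cached responses so repeat downloads don't wake the radio, and finish with connections promptly to cut tail time), something like the following is often used; the cache size and URL handling are illustrative assumptions, not part of the ARO material:

    import android.content.Context;
    import android.net.http.HttpResponseCache;
    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public final class NetworkHelper {

        // Install an HTTP response cache once (e.g. in Application.onCreate), so repeat
        // requests can be served locally instead of waking the radio again.
        public static void installCache(Context context) throws IOException {
            File cacheDir = new File(context.getCacheDir(), "http");
            HttpResponseCache.install(cacheDir, 10L * 1024 * 1024); // 10 MiB, illustrative size
        }

        // Fetch a resource and release the connection promptly rather than leaving it
        // open to accumulate radio "tail time".
        public static byte[] fetch(String url) throws IOException {
            HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
            try {
                InputStream in = connection.getInputStream();
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
                return out.toByteArray();
            } finally {
                connection.disconnect(); // explicitly finish with the connection
            }
        }
    }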

    Read the article

  • Mirth Transformer Error

    - by Ryan H
    I'm getting the following error when trying to convert HL7v3 to HL7v2 The message passed in is: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/"> <S:Body> <PRPA_IN201306UV02 xmlns="urn:hl7-org:v3" xmlns:ns2="urn:gov:hhs:fha:nhinc:common:nhinccommon" xmlns:ns3="urn:gov:hhs:fha:nhinc:common:patientcorrelationfacade" xmlns:ns4="http://schemas.xmlsoap.org/ws/2004/08/addressing" ITSVersion="XML_1.0"> <id extension="4ae5403:12752e71a17:-7b52" root="1.1.1"/> ... </PRPA_IN201306UV02> </S:Body> </S:Envelope> The error I get is: ERROR-300: Transformer error ERROR MESSAGE: Error evaluating transformer com.webreach.mirth.server.MirthJavascriptTransformerException: CHANNEL: v3v2ConversionResponseMessage CONNECTOR: sourceConnector SCRIPT SOURCE: LINE NUMBER: 5 DETAILS: TypeError: The prefix "S" for element "S:Envelope" is not bound. at com.webreach.mirth.server.mule.transformers.JavaScriptTransformer.evaluateScript(JavaScriptTransformer.java:460) at com.webreach.mirth.server.mule.transformers.JavaScriptTransformer.transform(JavaScriptTransformer.java:356) at org.mule.transformers.AbstractEventAwareTransformer.doTransform(AbstractEventAwareTransformer.java:48) at org.mule.transformers.AbstractTransformer.transform(AbstractTransformer.java:197) at org.mule.transformers.AbstractTransformer.transform(AbstractTransformer.java:200) at org.mule.impl.MuleEvent.getTransformedMessage(MuleEvent.java:251) at org.mule.routing.inbound.SelectiveConsumer.isMatch(SelectiveConsumer.java:61) at org.mule.routing.inbound.InboundMessageRouter.route(InboundMessageRouter.java:83) at org.mule.providers.AbstractMessageReceiver$DefaultInternalMessageListener.onMessage(AbstractMessageReceiver.java:493) at org.mule.providers.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:272) at org.mule.providers.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:231) at com.webreach.mirth.connectors.vm.VMMessageReceiver.getMessages(VMMessageReceiver.java:207) at org.mule.providers.TransactedPollingMessageReceiver.poll(TransactedPollingMessageReceiver.java:108) at org.mule.providers.PollingMessageReceiver.run(PollingMessageReceiver.java:90) at org.mule.impl.work.WorkerContext.run(WorkerContext.java:290) at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650) at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675) at java.lang.Thread.run(Unknown Source) When I remove the S: tag in front of the Envelope and Body and redefine the namespace to default, it gives me a new error "TypeError: The prefix "xsi" for attribute "xsi:nil" associated with an element type "targetMessage" is not bound." referring to <targetMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/> As if mirth can't handle the namespaces being defined on the same line as the first use of that element. Any suggestions would be useful
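    For readers hitting the same error, the usual E4X approach in a Mirth JavaScript transformer is to declare the namespaces explicitly rather than relying on the prefixes declared in the inbound document. A rough sketch only – 'rawSoap' stands in for the raw SOAP payload as a string, however your channel obtains it:

    var soapNs = new Namespace('http://schemas.xmlsoap.org/soap/envelope/');
    var hl7Ns = new Namespace('urn:hl7-org:v3');

    // E4X cannot parse a document that still carries an XML declaration, so strip it first.
    var envelope = new XML(rawSoap.replace(/<\?xml[^?]*\?>/, ''));

    // Navigate by namespace object instead of relying on the "S:" prefix being bound.
    var body = envelope.soapNs::Body;
    var response = body.hl7Ns::PRPA_IN201306UV02;

    var idRoot = response.hl7Ns::id.@root.toString();
    var idExtension = response.hl7Ns::id.@extension.toString();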

    Read the article

  • Android - Getting audio to play through earpiece

    - by Donal Rafferty
    I currently have code that reads a recording in from the device's mic using the AudioRecord class and then plays it back out using the AudioTrack class. My problem is that when I play it back, it plays via the speakerphone. I want it to play out via the earpiece on the device. Here is my code:

    public class LoopProg extends Activity {

        boolean isRecording; // currently not used
        AudioManager am;
        int count = 0;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
            am.setMicrophoneMute(true);
            while (count <= 1000000) {
                Record record = new Record();
                record.run();
                count++;
                Log.d("COUNT", "Count is : " + count);
            }
        }

        public class Record extends Thread {
            static final int bufferSize = 200000;
            final short[] buffer = new short[bufferSize];
            short[] readBuffer = new short[bufferSize];

            public void run() {
                isRecording = true;
                android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
                int buffersize = AudioRecord.getMinBufferSize(11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
                AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize);
                AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC, 11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize, AudioTrack.MODE_STREAM);
                am.setRouting(AudioManager.MODE_NORMAL, 1, AudioManager.STREAM_MUSIC);
                int ok = am.getRouting(AudioManager.ROUTE_EARPIECE);
                Log.d("ROUTING", "getRouting = " + ok);
                setVolumeControlStream(AudioManager.STREAM_VOICE_CALL);
                // am.setSpeakerphoneOn(true);
                Log.d("SPEAKERPHONE", "Is speakerphone on? : " + am.isSpeakerphoneOn());
                am.setSpeakerphoneOn(false);
                Log.d("SPEAKERPHONE", "Is speakerphone on? : " + am.isSpeakerphoneOn());
                atrack.setPlaybackRate(11025);
                byte[] buffer = new byte[buffersize];
                arec.startRecording();
                atrack.play();
                while (isRecording) {
                    arec.read(buffer, 0, buffersize);
                    atrack.write(buffer, 0, buffer.length);
                }
                arec.stop();
                atrack.stop();
                isRecording = false;
            }
        }
    }

    As you can see from the code, I have tried using the AudioManager class and its methods, including the deprecated setRouting method, and nothing works; the setSpeakerphoneOn method seems to have no effect at all, and neither does the routing method. Has anyone got any ideas on how to get it to play via the earpiece instead of the speakerphone?
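
    Not an authoritative fix, but the approach usually suggested for this is to build the AudioTrack against the voice-call stream rather than STREAM_MUSIC and keep speakerphone off; routing still varies by handset and OS version, so treat the sketch below (helper class and names are mine) as a starting point:

        import android.content.Context;
        import android.media.AudioFormat;
        import android.media.AudioManager;
        import android.media.AudioTrack;

        public class EarpiecePlayback {

            // Builds an AudioTrack on the voice-call stream, which most devices route to
            // the earpiece as long as speakerphone is off. STREAM_MUSIC, by contrast, is
            // normally routed to the loudspeaker or headphones.
            public static AudioTrack createEarpieceTrack(Context context, int sampleRate) {
                AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
                am.setSpeakerphoneOn(false);

                int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT);

                return new AudioTrack(AudioManager.STREAM_VOICE_CALL, sampleRate,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT,
                        bufferSize, AudioTrack.MODE_STREAM);
            }
        }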

    Read the article

  • RESTful WebServices with Kohana PHP 3

    - by Miller
    Hi, is it possible to build RESTful services with Kohana 3? I reviewed the source and found an abstract class Kohana_Controller_REST; how do I use it? If someone could post a snippet with routing as example code, it would be very much appreciated. Also, the lack of documentation on KO3 is driving me crazy; if someone knows a well-documented, fast, and proven PHP framework to use with a 100% JavaScript frontend, just let me know, but I would like to stick with Kohana because of its powerful ORM library. Thanks.

    Read the article

  • Camel-like integration component for Ruby

    - by Matthias Hryniszak
    Hi, I need to get some integration work done in my Ruby application. My main focus is web services and ActiveMQ integration (the systems I'm going to connect to are mainly written in Java). Is there something in Ruby that resembles the capabilities of Camel? All I need is something that lets me define routing for incoming data from the sources mentioned above, do some pre-processing, and post the results somewhere else. Thanks, Matthias
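
    For context, the kind of route Camel itself expresses on the Java side looks roughly like the sketch below; the queue name and target URL are made up, and it assumes the ActiveMQ and HTTP components are on the classpath and configured. The question is essentially asking for an equivalent Ruby DSL:

        import org.apache.camel.builder.RouteBuilder;
        import org.apache.camel.impl.DefaultCamelContext;

        public class OrdersRoute extends RouteBuilder {

            @Override
            public void configure() {
                // Consume from an ActiveMQ queue, do a trivial bit of pre-processing,
                // then hand the message to an HTTP endpoint.
                from("activemq:queue:orders.in")
                    .setHeader("source", constant("activemq"))
                    .to("http://integration.example/orders"); // hypothetical endpoint
            }

            public static void main(String[] args) throws Exception {
                DefaultCamelContext context = new DefaultCamelContext();
                context.addRoutes(new OrdersRoute());
                context.start();   // routes keep running until the context is stopped
                Thread.sleep(10000);
                context.stop();
            }
        }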

    Read the article

  • 404 in ASP.NET MVC with Integrated Pipeline mode

    - by David Martines
    IIS 7.0 (Shared Hosting) ASP.NET 2.0 Integrated Pipeline mode MVC 1.0 I get a 404 on every url except /default.aspx. I have this in my web.config: <system.webServer> <defaultDocument enabled="true"> <files> <clear /> <add value="Default.aspx" /> </files> </defaultDocument> <directoryBrowse enabled="false" /> <validation validateIntegratedModeConfiguration="false" /> <handlers> <add name="ScriptHandlerFactory_asmx" verb="*" path="*.asmx" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptHandlerFactory_axd" verb="*" path="*_AppService.axd" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptResourceHandler" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="MvcHttpHandler" verb="*" path="*.mvc" type="System.Web.Mvc.MvcHttpHandler, System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ErrorLogPageFactory" verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" /> </handlers> <modules runAllManagedModulesForAllRequests="true"> <remove name="ScriptModule" /> <remove name="UrlRoutingModule" /> <remove name="ErrorLog" /> <remove name="UnitOfWorkModule" /> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="UnitOfWorkModule" type="MusicCompany.Infrastructure.UnitOfWorkModule, MusicCompany.Infrastructure" /> <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" /> </modules> The only unusual thing to me is the defaultDocument. It seems I need it because of the way the host (shared hosting) is set up (?) Any clues? Thanks

    Read the article

  • RSpec mocking a nested model in Rails - ActionController problem

    - by emson
    Hi All I am having a problem in RSpec when my mock object is asked for a URL by the ActionController. The URL is a Mock one and not a correct resource URL. I am running RSpec 1.3.0 and Rails 2.3.5 Basically I have two models. Where a subject has many notes. class Subject < ActiveRecord::Base validates_presence_of :title has_many :notes end class Note < ActiveRecord::Base validates_presence_of :title belongs_to :subject end My routes.rb file nests these two resources as such: ActionController::Routing::Routes.draw do |map| map.resources :subjects, :has_many => :notes end The NotesController.rb file looks like this: class NotesController < ApplicationController # POST /notes # POST /notes.xml def create @subject = Subject.find(params[:subject_id]) @note = @subject.notes.create!(params[:note]) respond_to do |format| format.html { redirect_to(@subject) } end end end Finally this is my RSpec spec which should simply post my mocked objects to the NotesController and be executed... which it does: it "should create note and redirect to subject without javascript" do # usual rails controller test setup here subject = mock(Subject) Subject.stub(:find).and_return(subject) notes_proxy = mock('association proxy', { "create!" => Note.new }) subject.stub(:notes).and_return(notes_proxy) post :create, :subject_id => subject, :note => { :title => 'note title', :body => 'note body' } end The problem is that when the RSpec post method is called. The NotesController correctly handles the Mock Subject object, and create! the new Note object. However when the NoteController#Create method tries to redirect_to I get the following error: NoMethodError in 'NotesController should create note and redirect to subject without javascript' undefined method `spec_mocks_mock_url' for #<NotesController:0x1034495b8> Now this is caused by a bit of Rails trickery that passes an ActiveRecord object (@subject, in our case, which isn't ActiveRecord but a Mock object), eventually to url_for who passes all the options to the Rails' Routing, which then determines the URL. My question is how can I mock Subject so that the correct options are passed so that I my test passes. I've tried passing in :controller = 'subjects' options but no joy. Is there some other way of doing this? Thanks...

    Read the article

  • SSL pages under ASP.NET MVC

    - by David Laing
    How do I go about using HTTPS for some of the pages in my ASP.NET MVC based site? Steve Sanderson has a pretty good tutorial on how to do this in a DRY way on Preview 4 at http://blog.codeville.net/2008/08/05/adding-httpsssl-support-to-aspnet-mvc-routing/. Is there a better or updated way with Preview 5?

    Read the article

  • Asp.Net MVC missing style and defaults to logon page

    - by user279750
    I just set up an out-of-the-box "W2K8 R2 Web" server with a default IIS 7 installation. Then I installed the .NET 4 framework and ran the "aspnet_regiis -i" command. I created a site using a .NET 4.0 Integrated app pool, then created an MVC application using the default MVC project template. Without modifying it, I compiled the project and deployed the files to the virtual directory using Publish. I can pull the site up, but the styles are missing from the page and for some reason it routes to /Account/LogOn?ReturnUrl=/

    Read the article

  • Rails Subdomain Model-Based

    - by ShenoudaB
    Hi, in my Rails project I'm using subdomain_fu for subdomain support, but I'm looking for model-based subdomain support in subdomain_fu, because two models in my application have subdomain fields. I would like to route requests according to the subdomain specified: check which model the subdomain belongs to and then map it to the corresponding Rails routes. Regards, Shenouda Bertel

    Read the article

  • ASP.NET MVC framework port for Java EE?

    - by Adam Asham
    So I've played a bit with the new, not-yet-final release of the ASP.NET MVC framework and I find it very nice and elegant. However, at work we are tied to Java for the time being, so I'm wondering: is there a port of the framework out there for Java people like myself? I realize that WebForms isn't going to be available, unfortunately, but what about the routing framework? /Adam
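
    There is no one-to-one port I can point to with certainty, but as an illustration of what annotation-driven routing looks like on the Java side, here is a minimal Spring MVC-style controller sketch (the controller, path, and view names are made up; it assumes spring-webmvc 3.x on the classpath and a configured ViewResolver):

        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.PathVariable;
        import org.springframework.web.bind.annotation.RequestMapping;

        @Controller
        public class ProductsController {

            // Maps requests for /products/{id} to this handler, roughly the role the
            // ASP.NET MVC routing table plays; the returned string is a logical view
            // name that a ViewResolver turns into an actual page.
            @RequestMapping("/products/{id}")
            public String show(@PathVariable("id") long id) {
                return "products/show";
            }
        }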

    Read the article
