Search Results

Search found 487 results on 20 pages for 'etl instrumentation'.

Page 17/20

  • Android App Crashes On Second Run

    - by user1091286
    My app runs fine on the first run. To the menu I added two choices, "options" and "quit": "options" sets up a new intent that goes to a PreferenceActivity, and "quit" simply calls android.os.Process.killProcess(android.os.Process.myPid()). The second time I run my app (after I quit from inside the emulator) it crashes. Ideas? The menu is created by the following code:

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            getMenuInflater().inflate(R.menu.menu, menu);
            return true;
        }

        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            // Set up a new intent between the updater service and the main screen
            Intent options = new Intent(this, OptionsScreenActivity.class);
            // Switch case on the options
            switch (item.getItemId()) {
            case R.id.options:
                startActivity(options);
                return true;
            case R.id.quit:
                android.os.Process.killProcess(android.os.Process.myPid());
                return true;
            default:
                return false;
            }
        }

    Code for SeekBarPreference:

        package com.testapp.logic;

        import com.testapp.R;
        import android.content.Context;
        import android.content.res.TypedArray;
        import android.preference.Preference;
        import android.util.AttributeSet;
        import android.util.Log;
        import android.view.LayoutInflater;
        import android.view.View;
        import android.view.ViewGroup;
        import android.view.ViewParent;
        import android.widget.RelativeLayout;
        import android.widget.SeekBar;
        import android.widget.SeekBar.OnSeekBarChangeListener;
        import android.widget.TextView;

        public class SeekBarPreference extends Preference implements OnSeekBarChangeListener {

            private final String TAG = getClass().getName();
            private static final String ANDROIDNS = "http://schemas.android.com/apk/res/android";
            private static final String PREFS = "com.testapp.logic";
            private static final int DEFAULT_VALUE = 5;

            private int mMaxValue = 100;
            private int mMinValue = 1;
            private int mInterval = 1;
            private int mCurrentValue;
            private String mUnitsLeft = "";
            private String mUnitsRight = "";
            private SeekBar mSeekBar;
            private TextView mStatusText;

            public SeekBarPreference(Context context, AttributeSet attrs) {
                super(context, attrs);
                initPreference(context, attrs);
            }

            public SeekBarPreference(Context context, AttributeSet attrs, int defStyle) {
                super(context, attrs, defStyle);
                initPreference(context, attrs);
            }

            private void initPreference(Context context, AttributeSet attrs) {
                setValuesFromXml(attrs);
                mSeekBar = new SeekBar(context, attrs);
                mSeekBar.setMax(mMaxValue - mMinValue);
                mSeekBar.setOnSeekBarChangeListener(this);
            }

            private void setValuesFromXml(AttributeSet attrs) {
                mMaxValue = attrs.getAttributeIntValue(ANDROIDNS, "max", 100);
                mMinValue = attrs.getAttributeIntValue(PREFS, "min", 0);
                mUnitsLeft = getAttributeStringValue(attrs, PREFS, "unitsLeft", "");
                String units = getAttributeStringValue(attrs, PREFS, "units", "");
                mUnitsRight = getAttributeStringValue(attrs, PREFS, "unitsRight", units);
                try {
                    String newInterval = attrs.getAttributeValue(PREFS, "interval");
                    if (newInterval != null)
                        mInterval = Integer.parseInt(newInterval);
                } catch (Exception e) {
                    Log.e(TAG, "Invalid interval value", e);
                }
            }

            private String getAttributeStringValue(AttributeSet attrs, String namespace,
                    String name, String defaultValue) {
                String value = attrs.getAttributeValue(namespace, name);
                if (value == null)
                    value = defaultValue;
                return value;
            }

            @Override
            protected View onCreateView(ViewGroup parent) {
                RelativeLayout layout = null;
                try {
                    LayoutInflater mInflater = (LayoutInflater) getContext()
                            .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                    layout = (RelativeLayout) mInflater.inflate(R.layout.seek_bar_preference, parent, false);
                } catch (Exception e) {
                    Log.e(TAG, "Error creating seek bar preference", e);
                }
                return layout;
            }

            @Override
            public void onBindView(View view) {
                super.onBindView(view);
                try {
                    // move our seekbar to the new view we've been given
                    ViewParent oldContainer = mSeekBar.getParent();
                    ViewGroup newContainer = (ViewGroup) view.findViewById(R.id.seekBarPrefBarContainer);
                    if (oldContainer != newContainer) {
                        // remove the seekbar from the old view
                        if (oldContainer != null) {
                            ((ViewGroup) oldContainer).removeView(mSeekBar);
                        }
                        // remove the existing seekbar (there may not be one) and add ours
                        newContainer.removeAllViews();
                        newContainer.addView(mSeekBar, ViewGroup.LayoutParams.FILL_PARENT,
                                ViewGroup.LayoutParams.WRAP_CONTENT);
                    }
                } catch (Exception ex) {
                    Log.e(TAG, "Error binding view: " + ex.toString());
                }
                updateView(view);
            }

            /**
             * Update a SeekBarPreference view with our current state
             * @param view
             */
            protected void updateView(View view) {
                try {
                    RelativeLayout layout = (RelativeLayout) view;
                    mStatusText = (TextView) layout.findViewById(R.id.seekBarPrefValue);
                    mStatusText.setText(String.valueOf(mCurrentValue));
                    mStatusText.setMinimumWidth(30);
                    mSeekBar.setProgress(mCurrentValue - mMinValue);
                    TextView unitsRight = (TextView) layout.findViewById(R.id.seekBarPrefUnitsRight);
                    unitsRight.setText(mUnitsRight);
                    TextView unitsLeft = (TextView) layout.findViewById(R.id.seekBarPrefUnitsLeft);
                    unitsLeft.setText(mUnitsLeft);
                } catch (Exception e) {
                    Log.e(TAG, "Error updating seek bar preference", e);
                }
            }

            public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
                int newValue = progress + mMinValue;
                if (newValue > mMaxValue)
                    newValue = mMaxValue;
                else if (newValue < mMinValue)
                    newValue = mMinValue;
                else if (mInterval != 1 && newValue % mInterval != 0)
                    newValue = Math.round(((float) newValue) / mInterval) * mInterval;
                // change rejected, revert to the previous value
                if (!callChangeListener(newValue)) {
                    seekBar.setProgress(mCurrentValue - mMinValue);
                    return;
                }
                // change accepted, store it
                mCurrentValue = newValue;
                mStatusText.setText(String.valueOf(newValue));
                persistInt(newValue);
            }

            public void onStartTrackingTouch(SeekBar seekBar) {}

            public void onStopTrackingTouch(SeekBar seekBar) {
                notifyChanged();
            }

            @Override
            protected Object onGetDefaultValue(TypedArray ta, int index) {
                int defaultValue = ta.getInt(index, DEFAULT_VALUE);
                return defaultValue;
            }

            @Override
            protected void onSetInitialValue(boolean restoreValue, Object defaultValue) {
                if (restoreValue) {
                    mCurrentValue = getPersistedInt(mCurrentValue);
                } else {
                    int temp = 0;
                    try {
                        temp = (Integer) defaultValue;
                    } catch (Exception ex) {
                        Log.e(TAG, "Invalid default value: " + defaultValue.toString());
                    }
                    persistInt(temp);
                    mCurrentValue = temp;
                }
            }
        }

    Logcat:

        E/AndroidRuntime( 4525): FATAL EXCEPTION: main
        E/AndroidRuntime( 4525): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.ui.testapp/com.logic.testapp.SeekBarPreference}: java.lang.InstantiationException: can't instantiate class com.logic.testapp.SeekBarPreference; no empty constructor
        E/AndroidRuntime( 4525):   at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1879)
        E/AndroidRuntime( 4525):   at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1980)
        E/AndroidRuntime( 4525):   at android.app.ActivityThread.access$600(ActivityThread.java:122)
        E/AndroidRuntime( 4525):   at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1146)
        E/AndroidRuntime( 4525):   at android.os.Handler.dispatchMessage(Handler.java:99)
        E/AndroidRuntime( 4525):   at android.os.Looper.loop(Looper.java:137)
        E/AndroidRuntime( 4525):   at android.app.ActivityThread.main(ActivityThread.java:4340)
        E/AndroidRuntime( 4525):   at java.lang.reflect.Method.invokeNative(Native Method)
        E/AndroidRuntime( 4525):   at java.lang.reflect.Method.invoke(Method.java:511)
        E/AndroidRuntime( 4525):   at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
        E/AndroidRuntime( 4525):   at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
        E/AndroidRuntime( 4525):   at dalvik.system.NativeStart.main(Native Method)
        E/AndroidRuntime( 4525): Caused by: java.lang.InstantiationException: can't instantiate class com.logic.testapp.SeekBarPreference; no empty constructor
        E/AndroidRuntime( 4525):   at java.lang.Class.newInstanceImpl(Native Method)
        E/AndroidRuntime( 4525):   at java.lang.Class.newInstance(Class.java:1319)
        E/AndroidRuntime( 4525):   at android.app.Instrumentation.newActivity(Instrumentation.java:1023)
        E/AndroidRuntime( 4525):   at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1870)
        E/AndroidRuntime( 4525):   ... 11 more
        W/ActivityManager(   84): Force finishing activity com.ui.testapp/com.logic.testapp.SeekBarPreference
        W/ActivityManager(   84): Force finishing activity com.ui.testapp/.MainScreen
        I/WindowManager(   84): createSurface Window{41a90320 paused=false}: DRAW NOW PENDING
        W/ActivityManager(   84): Activity pause timeout for ActivityRecord{4104a848 com.ui.testapp/com.logic.testapp.SeekBarPreference}
        W/NetworkManagementSocketTagger(   84): setKernelCountSet(10021, 1) failed with errno -2
        I/WindowManager(   84): createSurface Window{412bcc10 com.android.launcher/com.android.launcher2.Launcher paused=false}: DRAW NOW PENDING
        W/NetworkManagementSocketTagger(   84): setKernelCountSet(10045, 0) failed with errno -2
        I/Process (  4525): Sending signal. PID: 4525 SIG: 9
        I/ActivityManager(   84): Process com.ui.testapp (pid 4525) has died.
        I/WindowManager(   84): WIN DEATH: Window{41a6c9c0 com.ui.testapp/com.ui.testapp.MainScreen paused=true}
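    A note on the trace: the InstantiationException shows Android trying to launch com.logic.testapp.SeekBarPreference itself as an Activity on relaunch, which suggests that an activity entry in the manifest (or the task record restored after the process was killed) points at the Preference subclass rather than at the PreferenceActivity. A minimal sketch of what the manifest should declare instead; the activity name here is illustrative:

        <!-- Declare the PreferenceActivity, never the Preference widget.
             SeekBarPreference is a view-level Preference and belongs only in the
             preferences XML (e.g. res/xml/preferences.xml), not in the manifest. -->
        <activity android:name="com.testapp.logic.OptionsScreenActivity" />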

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    The Why and the How
    If you have heard of Oracle Configuration Manager (OCM) but haven't installed it, I'm guessing this is for one of two reasons: either you don't know how it helps you, or you don't know how to install it. I'll address both of those reasons today. First, let's take a quick look at how My Oracle Support and Oracle Configuration Manager work together, so we understand their differences and roles before we tackle the install.
    Oracle Configuration Manager is the tool that actually performs the data collection task. You deploy this lightweight piece of software onto your system to collect configuration information about the system, and OCM uploads that data to Oracle's customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request, and you can also view the data through My Oracle Support. The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. Viewing the information in My Oracle Support's user interface to OCM may help you avoid situations that create problems: the proactive tools included in Oracle Configuration Manager help you avoid issues before they occur, and you save time because you didn't need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or to monitor system health. If you make the configuration data available to Oracle Support Engineers, then when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too.
    Quick Installation Process Overview
    Before we dive into the step-by-step details, let me provide a quick overview; for some of you, this will be all you need. Log in to My Oracle Support and download the data collector from the Collector tab. If you don't see the Collector tab, click the More tab to gain access. On the Collector tab you will find a drop-down list showing which platforms are available; you can also see more ways the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory and unzip the file within that directory. You can then use the command-line tool to start the installation process. The installation process requires:
    - Your My Oracle Support credential (Support Identifier, username, and password)
    - A proxy specification, if applicable (host IP address, port number, username, and password)
    Installation Step-by-Step
    1. Download the collector zip file from My Oracle Support and place it into your $ORACLE_HOME.
    2. Unzip the zip file you downloaded from My Oracle Support; this will create a directory named CCR with several subdirectories.
    3. Using the command line, go to $ORACLE_HOME/CCR/bin and run the command "setupCCR".
    4. Provide your My Oracle Support credential: login, password, and Support Identifier.
    5. The installer will start deploying the collector application. You have installed the Collector.
    Post Installation
    Now that you have installed successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation.
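    For reference, a minimal command-line transcript of the steps above; the zip file name varies by platform and version, so treat it as illustrative:

        cd $ORACLE_HOME
        unzip ocm-collector.zip    # creates the CCR directory tree
        cd CCR/bin
        ./setupCCR                 # prompts for login, password, and Support Identifier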
    If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details on that in the Installation and Administration Guide for My Oracle Support Configuration Manager.
    Related documents available on My Oracle Support:
    - Oracle Configuration Manager Installation and Administration Guide [ID 728989.5]
    - Oracle Configuration Manager Prerequisites [ID 728473.5]
    - Oracle Configuration Manager Network Connectivity Test [ID 728970.5]
    - Oracle Configuration Manager Collection Overview [ID 728985.5]
    - Oracle Configuration Manager Security Overview [ID 728982.5]
    - Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]

    Read the article

  • Archiving SQLHelp tweets

    - by jamiet
    #SQLHelp is a Twitter hashtag that can be used by any Twitter user to get help from the SQL Server community. I think it's fair to say that in its first year it has proved to be a very useful resource, however Kendra Little (@Kendra_Little) made a very salient point yesterday when she tweeted:
    "Is there a way to search the archives of #sqlhelp? Trying to remember answer to a question I know I saw a couple months ago" http://twitter.com/#!/Kendra_Little/status/15538234184441856
    This highlights an inherent problem with Twitter's search capability: it simply does not reach far enough back in time. I have made steps to remedy that situation by putting into place two initiatives to archive tweets that contain the #sqlhelp hashtag.
    The Archivist
    http://archivist.visitmix.com/ is a free service that, quite simply, archives a history of tweets containing a given search term by periodically polling Twitter's search service with that term, and subsequently displays a dashboard providing an aggregate view of those tweets for things like tweet volume over time, top users, and top words (Archivist FAQ). I have set up an archive on The Archivist for "sqlhelp", which you can view at http://archivist.visitmix.com/jamiet/7. Here is a screenshot of the SQLHelp dashboard 36 minutes after I set it up. There is lots of good information in there, including the fact that Jonathan Kehayias (@SQLSarg) is the most active SQLHelp tweeter (I suspect as an answerer rather than a questioner) and that SSIS has proven to be a rather (ahem) popular subject!
    Datasift
    The Archivist has its uses, though for our purposes it has a couple of downsides. For starters, you cannot search through an archive (which is what Kendra was after), nor can you export the contents of the archive for offline analysis. For those functions we need something a bit more heavyweight, and for that I present to you Datasift. Datasift is a tool (currently an alpha release) that allows you to search for tweets and provide them through an object called a Datasift stream. That sounds very similar to normal Twitter search, though it has one distinct advantage that other Twitter search tools do not: Datasift has access to Twitter's Streaming API (aka the Twitter firehose). In addition it has access to a lot of other rather nice features:
    - It provides the Datasift API, which allows you to consume the output of a Datasift stream in your tool of choice (bring on my favourite ultimate mashup tool)
    - It has a query language (called Filtered Stream Definition Language, FSDL for short)
    - A Datasift stream can consume (and filter) other Datasift streams
    - Datasift can (and does) consume services other than Twitter
    If I refer to Datasift as "ETL for tweets" then you may get some sort of idea what it is all about. Just as I did with The Archivist, I have set up a publicly available Datasift stream for "sqlhelp" at http://datasift.net/stream/1581/sqlhelp. Here is the FSDL query that provides the data:

        twitter.text contains "sqlhelp"

    Pretty simple, eh? At the current time it provides little more than a rudimentary dashboard, but as Datasift is currently an alpha release I think this may be worth keeping an eye on.
    The real value though is the ability to consume the output of a stream via Datasift's RESTful API. Observe:

        http://api.datasift.net/stream.xml?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d&username=jamiet&api_key=XXXXXXX

    (Note that an api_key is required during the alpha period so, given that I'm not supplying my api_key, this URI will not work for you.) Just to prove that a Datasift stream can indeed consume data from another stream, I have set up a second stream that further filters the first one for tweets containing "SSIS". That one is at http://datasift.net/stream/1586/ssis-sqlhelp and here is the FSDL query:

        rule "414c9845685ff8d2548999cf3162e897" and (interaction.content contains "ssis")

    When Datasift moves beyond alpha I'll reassess how useful this is going to be and post a follow-up blog. @Jamiet

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I'm sure some of you have read pieces about Hadoop World, and I did see some headlines which were somewhat, shall we say, interesting. I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple: he addressed some real use cases outside of internet and ad platforms. The following are my notes; since the keynote was recorded, I presume you can go and look at Hadoopworld.com at some point. On the use cases that were mentioned:
    - ETL – how can I do complex data transformation at scale
    - Doing Basel III liquidity analysis
    - Private banking – transaction filtering to feed [relational] data marts
    - Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere
    - 360-degree view of customers – become pro-active and look at events across lines of business. For example, make sure the mortgage folks know about direct deposits being stopped into an account, and ensure the bank is pro-active in servicing the customer
    - Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross-reference activity across businesses and geographies]
    - Data Mining
    - Bypass data engineering [I interpret this as running over the whole of a large data set rather than on samples]
    - Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur, grab web logs, firewall logs and rules, and start to figure out who is trying to log in. Is this me, who forgot his password, or is it someone in some other country trying to guess passwords?
    - Trade quality analysis – do a batch analysis of all trades done and run them through an analysis or comparison pipeline
    One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI tools, and Big Data reporting and tools should work with those tools rather than be separate. Security and entitlement – how to protect data within a large cluster from unwanted snooping – was another topic that came up. I thought his "elephant ears" graph was interesting (I couldn't actually read the points on it, but the concept certainly made some sense), and it was interesting – when asked for a show of hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years.
    Another interesting session was the one from Disney, discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed a real use case: where problems existed, how they were solved, and how Disney planned some of it. The planning focused on three things/phases:
    - Determine the strategy – design a platform and evangelize it within the organization
    - Focus on the people – hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), and leverage a partner with experience
    - Work on execution of the strategy – implement the platform (Hadoop next to the other technologies) and work toward the DaaS platform
    This fitted with some of the LinkedIn comments, best summarized in "Think Platform – Think Hadoop". In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform.
    One general observation: I got the impression that we have knowledge gaps left and right. On the one hand, people are looking for more information and details on the Hadoop tools and languages. On the other, I got the impression that the capabilities of today's relational databases are underestimated, mostly in terms of data volumes and parallel processing capabilities, or things like commodity-hardware scale-out models. All in all I liked this conference; it was great to chat with a wide range of people about Oracle big data, big data in general, use cases, and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I'm going to be back next year!

    Read the article

  • The SmartAssembly Rearchitecture

    - by Simon Cooper
    You may have noticed that not a lot has happened to SmartAssembly in the past few months. However, the team has been very busy behind the scenes working on an entirely new version of SmartAssembly.
    SmartAssembly 6.5
    Over the past few releases of SmartAssembly, the team had come to the realisation that the current 'architecture' – grown organically over the years, way before Red Gate bought it, from a simple name obfuscator into a full-featured obfuscator and assembly instrumentation tool – was simply not up to the task. Not for what we wanted to do with it at the time, and not for what we have planned for the future. Not only was it not up to what we wanted it to do, but it was severely limiting our development capabilities: long-standing bugs in the root architecture that couldn't be fixed, some rather... interesting... design decisions, and convoluted logic that increased the complexity of any bugfix or new feature tenfold.
    So, we set out to fix this. Earlier this year, a new engine was written on which SmartAssembly would be based. Over the following few months, each feature was ported over to the new engine and extensively tested by our existing unit and integration tests. The engine was linked into the existing UI (no easy task, due to the tight coupling between the UI and the old engine), and existing Red Gate products were tested on the new SmartAssembly to ensure the new engine acted in the same way. The result is SmartAssembly 6.5.
    The risks of a rearchitecture
    Are there risks to rearchitecting a product like SmartAssembly? Of course. There was a lot of undocumented behaviour in the old engine, and as part of the rearchitecture we had to find this behaviour, define it, and document it. In the process we found some behaviour of the old engine that simply did not make sense; hence the changes in pruning and obfuscation behaviour in the release notes. We had to find, document, and re-implement all the special edge cases. There was a chance that these special cases would not be found until near the end of the project, when everything is functionally complete and interacting together; by that stage, it would be hard to go back and change anything without a whole lot of extra work, delaying the release by months. We always knew this was a possibility; our initial estimate of the time required was '4 months, ± 4 months', and that was including various mitigation strategies to reduce the likelihood of these issues being found right at the end. Fortunately, this worst case did not happen.
    The rearchitecture also produced some benefits. As well as numerous bug fixes that we could not fix any other way, we've added logging that lets you find out exactly why a particular field or property wasn't pruned or obfuscated. There's a new command-line interface, we've tested it with WP7.1 and Silverlight 5, and we've added a new option to error reporting that improves the performance of instrumented apps by ~10%, at the cost of inaccurate line numbers in reports.
    So? What differences will I see?
    Largely none. SmartAssembly 6.5 produces the same output as SmartAssembly 6.2. The performance of 6.5 will be much faster for some users, and generally the same as 6.2 for the rest. If you've encountered a bug with previous versions of SmartAssembly, I encourage you to try 6.5, as it has most likely been fixed in the rearchitecture. If you encounter a bug with 6.5, please do tell us; we'll be doing another release quite soon, so we'll aim to fix any issues caused by 6.5 in that release.
    Most importantly, the new architecture finally allows us to implement some Big Things with SmartAssembly that we've been planning for many months; these will fundamentally change how you build, release, and monitor your application. Stay tuned for further updates!

    Read the article

  • Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of Pig and Pig Latin in the Big Data story. In this article we will understand what Sqoop and Zookeeper are in the Big Data story. There are two important components one should learn about when learning to interact with Hadoop: Sqoop and Zookeeper.
    What is Sqoop?
    Most businesses store their data in an RDBMS or in other data warehouse solutions. They need a way to move data to the Hadoop system to do various processing, and to return it from Hadoop back to the RDBMS. The data movement can happen in real time, or in bulk at various intervals. We need a tool which can help us move this data from SQL to Hadoop and from Hadoop to SQL. Sqoop (SQL to Hadoop) is such a tool: it extracts data from non-Hadoop data sources, transforms it into a format Hadoop can use, and loads it into HDFS. Essentially it is an ETL tool: it Extracts, Transforms and Loads from SQL to Hadoop. The best part is that it also extracts data from Hadoop and loads it back into SQL (RDBMS) data stores. Essentially, Sqoop is a command-line tool which does SQL to Hadoop and Hadoop to SQL. It is a command-line interpreter that creates MapReduce jobs behind the scenes to import data from an external database into HDFS. It is an effective and easy-to-learn tool for non-programmers; a concrete example appears at the end of this post.
    What is Zookeeper?
    ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. In other words, Zookeeper is a replicated synchronization service with eventual consistency. In simpler words: in a Hadoop cluster there are many different nodes, and one node is the master. Let us assume that the master node fails for some reason. In this case, the role of the master node has to be transferred to a different node. The main role of the master node is managing the writers, as that task requires persistence in the order of writing. In this kind of scenario, Zookeeper will assign a new master node and make sure that the Hadoop cluster performs without any glitch. Zookeeper is Hadoop's method of coordinating all the elements of these distributed systems. Here are a few of the tasks which Zookeeper is responsible for:
    - Zookeeper manages the entire workflow of starting and stopping the various nodes in the Hadoop cluster.
    - When any process in the Hadoop cluster needs certain configuration to complete a task, Zookeeper makes sure that the node gets the necessary configuration consistently.
    - In case the master node fails, Zookeeper can assign a new master node and make sure the cluster works as expected.
    There are many other tasks Zookeeper performs around Hadoop cluster coordination and communication. Basically, without the help of Zookeeper it is not possible to design a new fault-tolerant distributed application.
    Tomorrow
    In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem: Big Data analytics.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
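    As promised above, a hypothetical Sqoop import/export pair; the connection string, table names, and directories are all illustrative:

        # Pull a table from an RDBMS into HDFS (Sqoop runs this as a MapReduce job)
        sqoop import --connect jdbc:mysql://dbserver/sales --username etl_user -P \
            --table orders --target-dir /data/orders

        # Push processed results from HDFS back out to the RDBMS
        sqoop export --connect jdbc:mysql://dbserver/sales --username etl_user -P \
            --table order_summary --export-dir /data/order_summary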

    Read the article

  • PowerShell: Read Excel to Create Inserts

    - by BuckWoody
    I'm writing a series of articles on how to migrate "departmental" data into SQL Server. I also hold workshops on the entire process – from discovering that the data exists to the modeling process and then how to design the Extract, Transform and Load (ETL) process. Finally I write about (and teach) a few methods on actually moving the data. One of those options is to use PowerShell. There are a lot of ways even with that choice, but the one I show is to read two columns from the spreadsheet and output statements that would insert the data using a stored procedure. Of course, you could re-write this as INSERT statements, out to a text file for bcp, or even use a database connection in the script to move the data directly from Excel into SQL Server.
    This snippet won't run on your system, of course – it assumes a Microsoft Office Excel 2007 spreadsheet located at c:\temp called VendorList.xlsx. It looks for a tab in that spreadsheet called Vendors. The statement that does the writing just uses one column: Vendor Code. Here's the breakdown of what I'm doing: In the first block, I connect to Microsoft Office Excel. That connection string is specific to Excel 2007, so if you need a different version you'll need to look that up. In the second block I set up a selection from the entire spreadsheet based on that tab. Note that if you're only after certain data you shouldn't get the whole spreadsheet – that's just good practice. In the next block I create the text I want, inserting the Vendor Code field as I go. Finally I close the connection. Enjoy!

        $ExcelConnection = New-Object -com "ADODB.Connection"
        $ExcelFile = "c:\temp\VendorList.xlsx"
        $ExcelConnection.Open("Provider=Microsoft.ACE.OLEDB.12.0;`
            Data Source=$ExcelFile;Extended Properties=Excel 12.0;")

        $strQuery = "Select * from [Vendors$]"
        $ExcelRecordSet = $ExcelConnection.Execute($strQuery)

        do {
            Write-Host "EXEC sp_InsertVendors '" $ExcelRecordSet.Fields.Item("Vendor Code").Value "'"
            $ExcelRecordSet.MoveNext()
        } Until ($ExcelRecordSet.EOF)

        $ExcelConnection.Close()

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?
    For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following:

        class Transformer {
            void Process(InputSource input) {
                try {
                    var inRecords = _parser.Parse(input.Stream);
                    var outRecords = _pipeline.Transform(inRecords);
                } catch (Exception ex) {
                    var inner = new ProcessException(input, ex);
                    _logger.Error("Unable to parse source " + input.Name, inner);
                    throw inner;
                }
            }
        }

        class Pipeline {
            IEnumerable<Result> Transform(IEnumerable<Record> records) {
                // NOTE: no try/catch as I have no useful information to provide
                // at this point in the process
                var results = _algorithm.Process(records);
                // examine and do useful things with results
                return results;
            }
        }

        class Algorithm {
            IEnumerable<Result> Process(IEnumerable<Record> records) {
                var results = new List<Result>();
                foreach (var engine in Engines) {
                    foreach (var record in records) {
                        try {
                            // aggregate each engine's output
                            results.Add(engine.Process(record));
                        } catch (Exception ex) {
                            var inner = new EngineProcessingException(engine, record, ex);
                            _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                            throw inner;
                        }
                    }
                }
                return results;
            }
        }

        class Engine {
            Result Process(Record record) {
                for (int i = 0; i < record.SubRecords.Count; ++i) {
                    try {
                        Validate(record.SubRecords[i]);
                    } catch (Exception ex) {
                        var inner = new RecordValidationException(record, i, ex);
                        _logger.Error(
                            "Validation of subrecord {0} failed for record {1}", i, record);
                        throw inner;
                    }
                }
                return null; // construction of the real Result omitted for brevity
            }
        }

    There are a few important things to notice:
    - A single error at the deepest level causes three log entries (ugly? DoS?)
    - Thrown exceptions contain all important and useful information
    - Logging only happens when failure to do so would cause loss of useful information at a lower level
    Thoughts and concerns:
    - I don't like having so many log entries for each error
    - I don't want to lose important, useful data; the exceptions contain all the important information, but the stack trace is typically the only thing displayed besides the message
    - I can log at different levels (e.g., warning, informational)
    - The higher-level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced)
    - The information available at higher levels should not be passed to the lower levels
    So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can't quite put your finger on it, but something isn't quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day, and gather more "evidence". You check your symptoms over the next few days: do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.
    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop "mental" baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases "feels funny". Equally, they know when an "issue" is just a passing tremor. They see their SQL Server with all four of its CPU cores running close to 100% and don't panic anymore. Why? It's 5 PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest the first time they hear about an issue, even if it has been happening every day.
    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months.
    It isn't as hard as you think to get started. You've probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started, and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
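    As a concrete starting point, here is a sketch of that pattern applied to a real DMV; the schema and table names are just one way to set it up:

        -- One-time setup: a table to accumulate hourly snapshots of wait statistics
        CREATE TABLE BaseLine.WaitStats (
            wait_type           nvarchar(60),
            waiting_tasks_count bigint,
            wait_time_ms        bigint,
            captureTime         datetime2
        );

        -- The agent job step: snapshot the DMV with a timestamp
        INSERT INTO BaseLine.WaitStats (wait_type, waiting_tasks_count, wait_time_ms, captureTime)
        SELECT wait_type, waiting_tasks_count, wait_time_ms, SYSDATETIME()
        FROM sys.dm_os_wait_stats;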

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
    For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition, and the filegroup name on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab the information required. The table I'm working on contains 8.8 billion rows. Finding the distinct partition keys from this table was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $PARTITION function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn't pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table: queries for distinct against 8.8 billion rows don't go quickly.
    A few beers into a conversation with a friend, we ended up talking about work, which led to a conversation about the task described above. The solution was already built into SQL Server; it just needed to be pulled together.
    The first table I needed was sys.partition_range_values. This contains one row for each range boundary value of a partition function. In my case I have a partition function which uses dayid values; for example, July 4th would be represented as an int, 20130704. This table lists out all of the dayid values which were defined in the function, which eliminated the need to query my source table for distinct dayid values – everything I needed was already built in here for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had dayid values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values. I just created an "everything else" bucket in my SSIS package, just in case I had any dayid values unaccounted for.
    Getting the number of rows for a partition is very easy: the sys.partitions table contains values for each partition, easy enough to achieve by querying for the object_id and an index value of 1 (the clustered index).
    The final piece of information was the filegroup name. There are two options available to get the filegroup name: sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it's a matter of preference and data needs. In order to bridge between the sys.partitions table and either sys.data_spaces or sys.filegroups you need to get the container_id. This can be done by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups.
    The end result is the query below, which typically executes for me in under 1 second. I've included the join to sys.filegroups and to sys.data_spaces, with the join to sys.data_spaces commented out. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
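    A sketch of that query, reconstructed from the joins described above; the table name is illustrative, and the boundary-to-partition alignment depends on whether the function is RANGE LEFT or RANGE RIGHT:

        SELECT prv.value AS partition_boundary,   -- the dayid boundary values
               p.partition_number,
               p.rows,
               fg.name AS filegroup_name
        FROM sys.partitions p
        JOIN sys.indexes i            ON i.object_id = p.object_id AND i.index_id = p.index_id
        JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
        JOIN sys.allocation_units au  ON au.container_id = p.hobt_id
        JOIN sys.filegroups fg        ON fg.data_space_id = au.data_space_id
        -- JOIN sys.data_spaces ds    ON ds.data_space_id = au.data_space_id  -- the other option
        LEFT JOIN sys.partition_range_values prv
               ON prv.function_id = ps.function_id AND prv.boundary_id = p.partition_number
        WHERE p.object_id = OBJECT_ID('dbo.MyBigTable')   -- illustrative table name
          AND p.index_id = 1                              -- the clustered index
        ORDER BY p.partition_number;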

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Moreover, other news was not highlighted in the keynote but is equally important, if not more so, for the BI community. Let's start with the big news in the keynote (other details are on the SQL Server Blog):
    - Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance, and how it differs from using SSD technology.
    - Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered columnstore index. This is really great news for near-real-time reporting needs!
    - Polybase: in 2013, SQL Server 2012 Parallel Data Warehouse (PDW) will debut, and it will include the Polybase technology. Using Polybase, a single T-SQL query will run across relational data and Hadoop data – a single query language for both. This sounds really interesting for using Big Data in a more integrated way with existing relational databases and, of course, for loading a data warehouse using Big Data, which is the ultimate goal that all of us BI pros have, right?
    - SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now, and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.
    - Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool generating DAX queries on top of a Multidimensional model. There is still no info about availability, but note that this is *not* included in SQL Server 2012 SP1.
    So what about Mobile BI? Well, even if it was not announced during the keynote, there is a dedicated session on this topic, and there is very important news in this area:
    - iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working by June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (and whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don't have to wait another six months before seeing a demo of native BI applications on mobile platforms!
    - Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012.
    If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book-signing event tomorrow, Thursday, November 8, 2012, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you; just hit the '+' sign drop-down for access. Topics include:
    - Domain object caching
    - Service response caching
    - Session state caching
    - JSR-107
    - HotCache and more!
    Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here for you, just in case you had the same queries yourself. Enjoy!
    What is the largest Coherence deployment out there?
    We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.
    How is Coherence different from a NoSQL database like MongoDB?
    Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has a key-value data model primarily, but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is faster with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.
    Can I use Coherence for local caching?
    Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, etc.
    Are there APIs available for GoldenGate HotCache?
    It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior – and there are use cases for that.
    Are Coherence caches updated transactionally?
    Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA/XA distributed transactions, Coherence caches can participate as resources. But nobody does that, because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA/XA distributed transaction semantics.
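    Picking up the java.util.Map point from the local-caching answer above, a minimal sketch of basic cache access; the cache name and values are illustrative:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class CacheSketch {
            public static void main(String[] args) {
                // NamedCache extends java.util.Map; "prices" is an arbitrary cache name
                NamedCache prices = CacheFactory.getCache("prices");
                prices.put("ORCL", 13.37);
                System.out.println(prices.get("ORCL"));
                CacheFactory.shutdown();
            }
        }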

    Read the article

  • Inaccurate performance counter timer values in Windows Performance Monitor

    - by krisg
    I am implementing instrumentation within an application and have encountered an issue where the value that is displayed in Windows Performance Monitor from a PerformanceCounter is incongruent with the value that is recorded. I am using a Stopwatch to record the duration of a method execution; first I record the total milliseconds as a double, and second I pass the Stopwatch's TimeSpan.Ticks to the PerformanceCounter to be recorded in the Performance Monitor. Creating the performance counters in perfmon:

        var datas = new CounterCreationDataCollection();
        datas.Add(new CounterCreationData
        {
            CounterName = name,
            CounterType = PerformanceCounterType.AverageTimer32
        });
        datas.Add(new CounterCreationData
        {
            CounterName = namebase,
            CounterType = PerformanceCounterType.AverageBase
        });
        PerformanceCounterCategory.Create("Category", "performance data",
            PerformanceCounterCategoryType.SingleInstance, datas);

    Then, to record, I retrieve a pre-initialized counter from a collection and increment:

        _counters[counter].IncrementBy(timing);
        _counters[counterbase].Increment();

    ...where "timing" is the Stopwatch's TimeSpan.Ticks value. When this runs, the collection of doubles, which are the millisecond values for the Stopwatch's TimeSpans, shows one set of values, but what appears in PerfMon is a different set of values. For example, two values recorded in the list of milliseconds are:

        23322.675, 14230.614

    And what appears in the PerfMon graph are:

        15.546, 9.930

    Can someone explain this please?
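    One plausible explanation, offered as a guess rather than a confirmed answer: AverageTimer32 divides the accumulated tick delta by the system's high-resolution performance-counter frequency, so it expects raw Stopwatch ticks (Stopwatch.ElapsedTicks), whereas TimeSpan.Ticks are fixed 100-nanosecond units. On most machines the two differ by a large constant factor, which would produce exactly this kind of systematic discrepancy. A sketch of the change, where "stopwatch" is the Stopwatch instance being used:

        // Instead of stopwatch.Elapsed.Ticks (fixed 100 ns units)...
        _counters[counter].IncrementBy(stopwatch.ElapsedTicks); // raw high-resolution ticks
        _counters[counterbase].Increment();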

    Read the article

  • eclipse tomcat debug mode slow - pegs cpu

    - by andersonbd1
    Running Tomcat through Eclipse works fine in non-debug mode, but not in debug mode. When I try to start the Tomcat server in debug mode, the console output looks fine for a while, but then starts slowing down and eventually just stops, pegging the CPU at 100%. I don't think it's relevant, but just in case, here's the console output right about when it starts slowing down and eventually stopping (by stopping I mean no more console output, but still 100% CPU):

        2009-09-02 14:35:30,859 INFO NONE org.springframework.context.weaving.DefaultContextLoadTimeWeaver:72 - Found Spring's JVM agent for instrumentation
        2009-09-02 14:35:49,562 INFO NONE org.springframework.beans.factory.support.DefaultListableBeanFactory:414 - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@ed889d: defining beans [...
        2009-09-02 14:37:31,031 INFO NONE org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean:221 - Building JPA container EntityManagerFactory for persistence unit ...

    I tried everything I could think of to fix it:
    - cleaned the Tomcat working directory
    - restarted Eclipse
    - restarted Windows
    - refreshed/cleaned all projects
    I first had this problem last week using Eclipse Ganymede. I had been running fine in debug mode for several months prior to this issue, and I didn't make any significant changes to our project that would cause this. Eventually, I upgraded to Eclipse Galileo, which solved my problem. Now, two days later, I'm having the same problem in Galileo. Like I said, it works fine in non-debug mode. Any help is much appreciated. I should add that other things work in debug mode – for instance, JUnit tests – so it is something specific to Tomcat.

    Read the article

  • AssemblyLoadException in postsharp, problem with arguments from referenced DLLs?

    - by RodH257
    I'm just starting out with PostSharp/AOP. I want to add some instrumentation in C# to track the usage of some add-ins that I write for a piece of software. I am trying to use the OnMethodBoundaryAspect class to take note of the values of some of the parameters when the method is called. Those parameters are types which are referenced in an external DLL. When I add my attribute to the method, the project won't build; I get the following error:

        Error 2 Unhandled exception (2.0.5.1204, 64 bit, CLR 2.0, Release): PostSharp.CodeModel.AssemblyLoadException: Error while loading the assembly "C:\Program Files\Autodesk\Revit Structure 2011\Program\RevitAPI.dll": Could not load file or assembly 'revitapi, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)

    RevitAPI.dll is the file with the type in it. I've also tested just adding the attribute to the project but not applying it to any methods; this also causes the error. So it appears it's not related to the method parameter types themselves, but merely to the existence of this DLL. Has anyone come across this issue before, or can anyone point me in the right direction of where to get some more info on this?
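    For context, the kind of aspect being described would look something like this minimal sketch; the aspect name and what it logs are made up for illustration:

        using System;
        using PostSharp.Aspects;

        [Serializable]
        public class UsageTrackingAspect : OnMethodBoundaryAspect
        {
            // Runs before the decorated method body; args.Arguments holds the parameter values
            public override void OnEntry(MethodExecutionArgs args)
            {
                Console.WriteLine("{0} called with {1} argument(s)",
                    args.Method.Name, args.Arguments.Count);
            }
        }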

    Read the article

  • InstallUtil Publishing WMI Schema to 64 Bit Directory Instead of 32 Bit Directory

    - by Nick
    This is similar to this question, but it doesn't look like a good solution was ever determined, so I'm opening a new one with clarified details. We wrote a .NET service which, among other things, publishes some of the class hierarchy using WMI. On a 64-bit machine, we are running the 32-bit version of InstallUtil to install the service. It installs successfully, but when the service runs, we receive the following error when publishing a WMI class using Instrumentation.Publish():

        DirectoryNotFoundException - (Could not find a part of the path 'C:\Windows\system32\WBEM\Framework\root\MyNamespace\MyService'.)

    However, this directory does exist in the C:\Windows\syswow64 directory. If we manually copy that directory structure to the system32 directory, then everything works. However, we are looking for an automated solution, because we have this packaged up in an MSI which we distribute onto many servers. We have tried running the 64-bit version of InstallUtil to see if that would work; however – and this is the really weird part – it gives us an error on install that says:

        Installing WMI Schema: Started
        An exception occurred during the Install phase. System.IO.DirectoryNotFoundException: Could not find a part of the path 'C:\Windows\system32\WBEM\Framework\root\MyNamespace\MyService.mof'.

    It looks as if somehow the WMI installer flipped around. Has anyone else experienced this, or know of a workaround?
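    For reference, the publishing pattern in question is roughly the following; the class and member names are illustrative, not from the original service:

        using System.Management.Instrumentation;

        // An instance class exposed to WMI via System.Management.Instrumentation
        [InstrumentationClass(InstrumentationType.Instance)]
        public class ServiceHealth
        {
            public string ComponentName;
            public int QueueDepth;
        }

        // Somewhere in the service:
        // var health = new ServiceHealth { ComponentName = "Loader", QueueDepth = 0 };
        // Instrumentation.Publish(health);  // this is the call that throws above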

    Read the article

  • SSAS Cube reprocessing fails - then succeeds if I try again

    - by EdgarVerona
    So I'm basically brand new to the concept of BI, and I've inherited an existing ETL process that has two steps:
    1. Load the data into a database that is only used by the cube processing
    2. Start the SSAS cube processing against said database
    It seems pretty well isolated, but occasionally (once a week, sometimes twice) it will fail with the following exception:

        Errors in the OLAP storage engine: The attribute key cannot be found

    Now the interesting thing is that:
    1. The dimension having the issue is not usually the same one (i.e. there's no single dimension that consistently has this failure)
    2. The source table, when I inspect it, does actually contain the attribute key that it says could not be found
    And, most interestingly...
    3. If I then immediately reprocess the dimensions and cubes manually through SSMS, they reprocess successfully and without incident.
    In both the aforementioned job and when I reprocess through SSMS, I am using "ProcessFull", so it should be reprocessing them completely. Has anyone run into such an issue? I'm scratching my head about it, because if it were a genuine data integrity issue, reprocessing the cube again wouldn't fix it. What on earth could be happening? I've been tasked with finding out why this happens, but I can neither reproduce it consistently nor point to a data integrity problem as the root cause. Thanks for any input you can provide!

    Read the article

  • Loading and binding a serialized view model to a WPF window?

    - by generalt
    Hello all. I'm writing a one-window UI for a simple ETL tool. The UI consists of the window, the code-behind for the window, a view model for the window, and the business logic. I wanted to give users a way to save the state of the UI, because the content of about 10-12 text boxes is reused between sessions but is specific to each user. I figured I could serialize the view model, which contains all the data from the text boxes, and this works fine; however, I'm having trouble loading the information from the serialized XML file back into the text boxes. Constructor of window: public ETLWindow() { InitializeComponent(); _viewModel = new ViewModel(); this.DataContext = _viewModel; _viewModel.State = Constants.STATE_IDLE; Loaded += new RoutedEventHandler(MainWindow_Loaded); } XAML: <TextBox x:Name="targetDirectory" IsReadOnly="true" Text="{Binding TargetDatabaseDirectory, UpdateSourceTrigger=PropertyChanged}"/> Corresponding view model property: private string _targetDatabaseDirectory; [XmlElement()] public string TargetDatabaseDirectory { get { return _targetDatabaseDirectory; } set { _targetDatabaseDirectory = value; OnPropertyChanged(DataUtilities.General.Utilities.GetPropertyName(() => new ViewModel().TargetDatabaseDirectory)); } } Load event in code-behind: private void loadState_Click(object sender, RoutedEventArgs e) { string statePath = this.getFilePath(); _viewModel = ViewModel.LoadModel(statePath); } As you can guess, the LoadModel method deserializes the serialized file on the user's drive. I couldn't find much on the web regarding this issue. I know this probably has something to do with my bindings. Is there some way to refresh the bindings in the XAML after I deserialize the view model? Or perhaps refresh all properties on the view model? Or am I completely insane thinking any of this could be done? Thanks.
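
    The symptom described, serialization working but the text boxes never updating, is consistent with the window's DataContext still pointing at the old view model instance: the bindings watch the object they were handed, not the _viewModel field. A minimal fix against the code shown would be to reassign the DataContext after loading:

    private void loadState_Click(object sender, RoutedEventArgs e)
    {
        string statePath = this.getFilePath();
        _viewModel = ViewModel.LoadModel(statePath);

        // Re-point the window at the freshly deserialized instance;
        // the existing XAML bindings re-evaluate against the new DataContext.
        this.DataContext = _viewModel;
    }

    The alternative is to keep the original DataContext and copy the deserialized values onto the existing view model property by property, so that OnPropertyChanged fires for each one.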

    Read the article

  • Changing the value of one TextView when another TextView changes, by multiplying

    - by sur007
    Here I am getting parsed data from a URL, and now I am trying to update the parsed values dynamically in the TextViews for the user. My code is: package com.mokshya.jsontutorial; import java.text.DecimalFormat; import java.util.ArrayList; import java.util.HashMap; import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; import com.mokshya.jsontutorialhos.xmltest.R; import android.app.AlertDialog; import android.app.ListActivity; import android.content.DialogInterface; import android.content.Intent; import android.os.Bundle; import android.text.Editable; import android.text.TextWatcher; import android.util.Log; import android.view.View; import android.widget.AdapterView; import android.widget.AdapterView.OnItemClickListener; import android.widget.EditText; import android.widget.ListAdapter; import android.widget.ListView; import android.widget.SimpleAdapter; import android.widget.TextView; import android.widget.Toast; public class Main extends ListActivity { EditText resultTxt; public double C_webuserDouble; public double C_cashDouble; public double C_transferDouble; public double S_webuserDouble; public double S_cashDouble; public double S_transferDouble; TextView cashTxtView; TextView webuserTxtView; TextView transferTxtView; TextView S_cashTxtView; TextView S_webuserTxtView; TextView S_transferTxtView; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.listplaceholder); cashTxtView = (TextView) findViewById(R.id.cashTxtView); webuserTxtView = (TextView) findViewById(R.id.webuserTxtView); transferTxtView = (TextView) findViewById(R.id.transferTxtView); S_cashTxtView = (TextView) findViewById(R.id.S_cashTxtView); S_webuserTxtView = (TextView) findViewById(R.id.S_webuserTxtView); S_transferTxtView = (TextView) findViewById(R.id.S_transferTxtView); ArrayList<HashMap<String, String>> mylist = new ArrayList<HashMap<String, String>>(); JSONObject json = JSONfunctions.getJSONfromURL("http://ldsclient.com/ftp/strtojson.php"); try { JSONArray netfoxlimited = json.getJSONArray("netfoxlimited"); for (int i = 0; i < netfoxlimited.length(); i++) { HashMap<String, String> map = new HashMap<String, String>(); JSONObject e = netfoxlimited.getJSONObject(i); map.put("date", e.getString("date")); map.put("c_web", e.getString("c_web")); map.put("c_bank", e.getString("c_bank")); map.put("c_cash", e.getString("c_cash")); map.put("s_web", e.getString("s_web")); map.put("s_bank", e.getString("s_bank")); map.put("s_cash", e.getString("s_cash")); mylist.add(map); } } catch (JSONException e) { Log.e("log_tag", "Error parsing data " + e.toString()); } ListAdapter adapter = new SimpleAdapter(this, mylist, R.layout.main, new String[] { "date", "c_web", "c_bank", "c_cash", "s_web", "s_bank", "s_cash", }, new int[] { R.id.item_title, R.id.webuserTxtView, R.id.transferTxtView, R.id.cashTxtView, R.id.S_webuserTxtView, R.id.S_transferTxtView, R.id.S_cashTxtView }); setListAdapter(adapter); final ListView lv = getListView(); lv.setTextFilterEnabled(true); lv.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> parent, View view, int position, long id) { @SuppressWarnings("unchecked") HashMap<String, String> o = (HashMap<String, String>) lv.getItemAtPosition(position); Toast.makeText(Main.this, "ID '" + o.get("id") + "' was clicked.",
Toast.LENGTH_SHORT).show(); } }); resultTxt = (EditText) findViewById(R.id.editText1); resultTxt.setOnClickListener(new View.OnClickListener() { public void onClick(View arg0) { resultTxt.setText(""); } }); resultTxt.addTextChangedListener(new TextWatcher() { public void afterTextChanged(Editable arg0) { String text; text = resultTxt.getText().toString(); if (resultTxt.getText().length() > 5) { calculateSum(C_webuserDouble, C_cashDouble, C_transferDouble); calculateSunrise(S_webuserDouble, S_cashDouble, S_transferDouble); } else { } } public void beforeTextChanged(CharSequence s, int start, int count, int after) { } public void onTextChanged(CharSequence s, int start, int before, int count) { } }); } private void calculateSum(Double webuserDouble, Double cashDouble, Double transferDouble) { String Qty; Qty = resultTxt.getText().toString(); if (Qty.length() > 0) { double QtyValue = Double.parseDouble(Qty); double cashResult; double webuserResult; double transferResult; cashResult = cashDouble * QtyValue; webuserResult = webuserDouble * QtyValue; transferResult = transferDouble * QtyValue; DecimalFormat df = new DecimalFormat("#.##"); String cashResultStr = df.format(cashResult); String webuserResultStr = df.format(webuserResult); String transferResultStr = df.format(transferResult); cashTxtView.setText(String.valueOf(cashResultStr)); webuserTxtView.setText(String.valueOf(webuserResultStr)); transferTxtView.setText(String.valueOf(transferResultStr)); // cashTxtView.setFilters(new InputFilter[] {new // DecimalDigitsInputFilter(2)}); } if (Qty.length() == 0) { cashTxtView.setText(String.valueOf(cashDouble)); webuserTxtView.setText(String.valueOf(webuserDouble)); transferTxtView.setText(String.valueOf(transferDouble)); } } private void calculateSunrise(Double webuserDouble, Double cashDouble, Double transferDouble) { String Qty; Qty = resultTxt.getText().toString(); if (Qty.length() > 0) { double QtyValue = Double.parseDouble(Qty); double cashResult; double webuserResult; double transferResult; cashResult = cashDouble * QtyValue; webuserResult = webuserDouble * QtyValue; transferResult = transferDouble * QtyValue; DecimalFormat df = new DecimalFormat("#.##"); String cashResultStr = df.format(cashResult); String webuserResultStr = df.format(webuserResult); String transferResultStr = df.format(transferResult); S_cashTxtView.setText(String.valueOf(cashResultStr)); S_webuserTxtView.setText(String.valueOf(webuserResultStr)); S_transferTxtView.setText(String.valueOf(transferResultStr)); } if (Qty.length() == 0) { S_cashTxtView.setText(String.valueOf(cashDouble)); S_webuserTxtView.setText(String.valueOf(webuserDouble)); S_transferTxtView.setText(String.valueOf(transferDouble)); } } } and I am getting the following error in LogCat: 08-28 15:04:12.839: E/AndroidRuntime(584): Uncaught handler: thread main exiting due to uncaught exception 08-28 15:04:12.848: E/AndroidRuntime(584): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.mokshya.jsontutorialhos.xmltest/com.mokshya.jsontutorial.Main}: java.lang.NullPointerException 08-28 15:04:12.848: E/AndroidRuntime(584): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2401) 08-28 15:04:12.848: E/AndroidRuntime(584): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417) 08-28 15:04:12.848: E/AndroidRuntime(584): at
android.app.ActivityThread.access$2100(ActivityThread.java:116) 08-28 15:04:12.848: E/AndroidRuntime(584): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 08-28 15:04:12.848: E/AndroidRuntime(584): at android.os.Handler.dispatchMessage(Handler.java:99) 08-28 15:04:12.848: E/AndroidRuntime(584): at android.os.Looper.loop(Looper.java:123) 08-28 15:04:12.848: E/AndroidRuntime(584): at android.app.ActivityThread.main(ActivityThread.java:4203) 08-28 15:04:12.848: E/AndroidRuntime(584): at java.lang.reflect.Method.invokeNative(Native Method) 08-28 15:04:12.848: E/AndroidRuntime(584): at java.lang.reflect.Method.invoke(Method.java:521) 08-28 15:04:12.848: E/AndroidRuntime(584): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 08-28 15:04:12.848: E/AndroidRuntime(584): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 08-28 15:04:12.848: E/AndroidRuntime(584): at dalvik.system.NativeStart.main(Native Method) 08-28 15:04:12.848: E/AndroidRuntime(584): Caused by: java.lang.NullPointerException 08-28 15:04:12.848: E/AndroidRuntime(584): at com.mokshya.jsontutorial.Main.onCreate(Main.java:111) 08-28 15:04:12.848: E/AndroidRuntime(584): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) 08-28 15:04:12.848: E/AndroidRuntime(584): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364)

    Read the article

  • How to restrict access to a class's data based on state?

    - by Marcus Swope
    In an ETL application I am working on, we have three basic processes: Validate and parse an XML file of customer information from a third party Match values received in the file to values in our system Load customer data into our system The issue here is that we may need to display the customer information from any or all of the above states to an internal user, and there is data in our customer class that will never be populated before the values have been matched in our system (step 2). For this reason, I would like the values to not even be accessible when the customer is in that state, and I would like to avoid repeating logic like this everywhere: if (customer.IsMatched) DisplayTextOnWeb(customer.SomeMatchedValue); My first thought was to add a couple of interfaces on top of Customer that would only expose the properties and behaviors of the current state, and then only deal with those interfaces. The problem with this approach is that there seems to be no good way to move from an ICustomerWithNoMatchedValues to an ICustomerWithMatchedValues without doing direct casts, etc... (or at least I can't find one). I can't be the first to have come across this; how do you normally approach it? As a last caveat, I would like for this solution to play nice with FluentNHibernate :) Thanks in advance...
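
    One common shape for this, sketched below with invented names, is to expose the matched-only members through a second interface and have the state transition itself hand back that richer interface. The direct casts disappear, and callers holding the unmatched interface cannot even name the matched values:

    public interface IUnmatchedCustomer
    {
        string RawName { get; }
        IMatchedCustomer Match();   // the only road to the matched data
    }

    public interface IMatchedCustomer
    {
        string RawName { get; }
        string SomeMatchedValue { get; }
    }

    // A single backing class keeps the NHibernate mapping simple; the state
    // is enforced by which interface a caller holds.
    public class Customer : IUnmatchedCustomer, IMatchedCustomer
    {
        public virtual string RawName { get; set; }
        public virtual string SomeMatchedValue { get; set; }
        public virtual bool IsMatched { get; protected set; }

        public virtual IMatchedCustomer Match()
        {
            // ... matching logic against our system's values goes here ...
            IsMatched = true;
            return this;
        }
    }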

    Read the article

  • Throttling CPU/Memory usage of a Thread in Java?

    - by Nalandial
    I'm writing an application that will have multiple threads running, and I want to throttle the CPU/memory usage of those threads. There is a similar question for C++, but I want to try and avoid using C++ and JNI if possible. I realize this might not be possible using a higher-level language, but I'm curious to see if anyone has any ideas. EDIT: Added a bounty; I'd like some really good, well-thought-out ideas on this. EDIT 2: The situation I need this for is executing other people's code on my server. Basically it is completely arbitrary code, with the only guarantee being that there will be a main method on the class file. Currently, multiple completely disparate classes, which are loaded in at runtime, are executing concurrently as separate threads. I inherited this code (the original author is gone). The way it's written, it would be a pain to refactor to create separate processes for each class that gets executed. If that's the only good way to limit memory usage via the VM arguments, then so be it. But I'd like to know if there's a way to do it with threads. Even as a separate process, I'd like to be able to somehow limit its CPU usage, since as I mentioned earlier, several of these will be executing at once. I don't want an infinite loop to hog all the resources. EDIT 3: An easy way to approximate object size is with Java's Instrumentation classes; specifically, the getObjectSize method. Note that there is some special setup needed to use this tool.

    Read the article

  • Why does this code generate different numbers?

    - by frbry
    Hello, I have this function that creates a unique number for a hard-disk and CPU combination. DWORD hw_hash() { char drv[4]; char szNameBuffer[256]; DWORD dwHddUnique; DWORD dwProcessorUnique; DWORD dwUniqueKey; char *sysDrive = getenv ("SystemDrive"); strcpy(drv, sysDrive); drv[2] = '\\'; drv[3] = 0; GetVolumeInformation(drv, szNameBuffer, 256, &dwHddUnique, NULL, NULL, NULL, NULL); SYSTEM_INFO si; GetSystemInfo(&si); dwProcessorUnique = si.dwProcessorType + si.wProcessorArchitecture + si.wProcessorRevision; dwUniqueKey = dwProcessorUnique + dwHddUnique; return dwUniqueKey; } It returns different numbers if I format my hard disk and install a new Windows. Any ideas why? Thank you. Edit: OK, got it: This function returns the volume serial number that the operating system assigns when a hard disk is formatted. To programmatically obtain the hard disk's serial number that the manufacturer assigns, use the Windows Management Instrumentation (WMI) Win32_PhysicalMedia property SerialNumber. I should have done more research before posting my problem online. Sorry to bother you; let's keep this here in case anybody else needs it.
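
    For anyone landing here with the same problem, the WMI query mentioned in the edit might look like the following. This is a sketch in C# using System.Management rather than the original C++; the class and property names are as documented for Win32_PhysicalMedia, and error handling is omitted:

    using System;
    using System.Management;

    class DiskSerial
    {
        static void Main()
        {
            // Win32_PhysicalMedia.SerialNumber is the manufacturer-assigned
            // serial, unlike the format-time volume serial returned by
            // GetVolumeInformation.
            ManagementObjectSearcher searcher = new ManagementObjectSearcher(
                "SELECT Tag, SerialNumber FROM Win32_PhysicalMedia");

            foreach (ManagementObject disk in searcher.Get())
            {
                Console.WriteLine("{0}: {1}", disk["Tag"], disk["SerialNumber"]);
            }
        }
    }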

    Read the article

  • Code Coverage tool for BlackBerry

    - by Skrud
    I'm looking for a code coverage tool that I can use with a BlackBerry application. I'm using J2ME-Unit for unit testing and I want to see how much of my code is covered by my tests. I've tried using Cobertura for J2ME, but after days of wrestling with it I failed to get any results from it. (I believe the instrumentation is undone by the RAPC compilation.) And despite this message, the project seems to be dead. I've looked at JInjector, but the project seems very incomplete. There is little (if any) documentation, and although it claims to be able to work with BlackBerry projects, I haven't seen any places where it has been used for that purpose. I've played with the project quite a bit but to no avail. I've also tried the "Coverage" view in the BlackBerry JDE, even though I use Eclipse for development. The view stays permanently blank, regardless of clicking "Refresh" and running the application from the JDE. I've looked at most of the tools in this SO thread, but they won't work with J2ME/BlackBerry projects. Has anyone had any success with any code coverage tools on the BlackBerry? If so, what tools have you used, and how have you used them? If anyone has managed to get JInjector or Cobertura for J2ME to work with a BlackBerry project, what did you have to do to get it working?

    Read the article

  • [C#] How to do exception handling & tracing

    - by shrimpy
    Hi all, I am reading some C# books and got an exercise I don't know how to do, or rather I am not sure what the question means. Problem: After working for a company for some time, your skills as a knowledgeable developer are recognized, and you are given the task of "policing" the implementation of exception handling and tracing in the source code (C#) for an enterprise application that is under constant incremental development. The two goals set by the product architect are: 100% of methods in the entire application must have at least a standard exception handler, using try/catch/finally blocks; more complex methods must also have additional exception handling for specific exceptions All control flow code can optionally write "tracing" information to assist in debugging and instrumentation of the application at run-time in situations where traditional debuggers are not available (e.g. on staging and production servers). (I don't quite understand these criteria. I come from the Java world, where there are two kinds of exceptions, checked and unchecked. Developers must handle checked exceptions and log them; unchecked exceptions may still be logged, but most of the time we just throw them. Here in C#, what should I do?) Question for the problem: List the rules you would create for the development team to follow, and the ways in which you would enforce those rules, to achieve these goals. How would you go about ensuring that all existing code complies with the rules specified by the product architect; in particular, what considerations would impact your planning for the work to ensure all existing code complies?
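
    As a concrete starting point for goal #1, the "standard handler" itself could be pinned down in the coding standard, something like the sketch below (one reasonable shape, not the only one; the class and switch names are invented). The TraceSwitch addresses goal #2, since it can be flipped in app.config at run-time on staging or production servers without a debugger:

    using System;
    using System.Diagnostics;

    public class OrderService
    {
        // Run-time-configurable tracing; the switch name is invented.
        private static readonly TraceSwitch Tracing =
            new TraceSwitch("OrderService", "Control-flow tracing");

        public void PlaceOrder(int orderId)
        {
            Trace.WriteLineIf(Tracing.TraceVerbose, "PlaceOrder enter: " + orderId);
            try
            {
                // ... method body ...
            }
            catch (InvalidOperationException ex)
            {
                // The "more complex methods" rule: specific exceptions first.
                Trace.TraceWarning("PlaceOrder recoverable failure: {0}", ex);
                throw;
            }
            catch (Exception ex)
            {
                // The mandated standard handler: log, then rethrow with
                // "throw;" so the original stack trace is preserved.
                Trace.TraceError("PlaceOrder failed: {0}", ex);
                throw;
            }
            finally
            {
                Trace.WriteLineIf(Tracing.TraceVerbose, "PlaceOrder exit: " + orderId);
            }
        }
    }

    With a pattern that mechanical, enforcement can also be mechanical: a static-analysis or code-review rule that flags any method lacking the standard handler, rather than relying on every developer remembering it.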

    Read the article

  • Table not created by Hibernate

    - by User1
    I annotated a bunch of POJOs so that JPA can use them to create tables in Hibernate. It appears that all of the tables are created except one very central table called "Revision". The Revision class has an @Entity(name="RevisionT") annotation so it will be renamed to RevisionT and not conflict with any reserved words in MySQL (the target database). I delete the entire database, recreate it, and basically open and close a JPA session. All the tables seem to get recreated without a problem. Why would a single table be missing from the created schema? What instrumentation can be used to see what Hibernate is producing and what errors occur? Thanks. UPDATE: I tried creating the schema as a Derby DB and it was successful. However, one of the fields has a name of "index". I use @org.hibernate.annotations.IndexColumn to set the name to something other than a reserved word. However, the column is always called "index" when it is created. Here's a sample of the suspect annotations. @ManyToOne @JoinColumn(name="MasterTopID") @IndexColumn(name="Cx3tHApe") protected MasterTop masterTop; Instead of creating MasterTop.Cx3tHApe as a field, it creates MasterTop.Index. Why is the name ignored?

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20  | Next Page >