
  • jQuery $.ajax response empty, but only in Chrome

    - by roguepixel
    I've exhausted every avenue of research to solve this one, so hopefully someone else will think of something I just didn't. It's a relatively straightforward setup: I have an HTML page with some JavaScript that makes an Ajax request to a URL (in the same domain). The Java web app in the background does its stuff and returns a partial HTML page (no html, head or body tags, just the content), which should be inserted at a particular point in the page. It all sounds pretty easy, and the code I have works in IE, Firefox and Safari, but not in Chrome. In Chrome the target element just ends up empty, and if I look at the resource request in Chrome's developer tools the response content is also empty. All very confusing; I've tried a myriad of things to solve it and I'm just out of ideas. Any help would be greatly appreciated.

        var container = $('#container');

        $.ajax({
            type: 'GET',
            url: '/path/to/local/url',
            data: data('parameters=value&another=value2'),
            dataType: 'html',
            cache: false,
            beforeSend: requestBefore,
            complete: requestComplete,
            success: requestSuccess,
            error: requestError
        });

        function data(parameters) {
            var dictionary = {};
            var pairs = parameters.split('&');
            for (var i = 0; i < pairs.length; i++) {
                var keyValuePair = pairs[i].split('=');
                dictionary[keyValuePair[0]] = keyValuePair[1];
            }
            return dictionary;
        }

        function requestBefore() {
            container.find('.message.error').hide();
            container.prepend('<div class="modal"><div class="indicator">Loading...</div></div>');
        }

        function requestComplete() {
            container.find('.modal').remove();
        }

        function requestSuccess(response) {
            container.empty();
            container.html(response);
        }

        function requestError(response) {
            if (response.status == 200 && response.responseText == 'OK') {
                requestSuccess(response);
            } else {
                container.find('.message.error').fadeIn('slow');
            }
        }

    All of this is executed in a $(document).ready(function() {});. Cheers, Jim

    @Oleg - Additional information requested: an example of the response that the Ajax call might receive.

        <p class="message error hidden">An unknown error occurred while trying to retrieve data, please try again shortly.</p>
        <div class="timeline">
            <a class="icon shuttle-previous" rel="max_id=16470650733&page=1&q=something">Newer Data</a>
            <a class="icon shuttle-next" rel="max_id=16470650733&page=3&q=something">Older Data</a>
        </div>
        <ol class="social">
            <li class="even">
                <div class="avatar">
                    <img src="sphere_normal.gif"/>
                </div>
                <p>
                    Some Content<br/>
                    <span class="published">Jun 18, 2010 11:29:05 AM</span> - <a target="_blank" href="">Direct Link</a>
                </p>
            </li>
            <li class="odd">
                <div class="avatar">
                    <img src="sphere_normal.gif"/>
                </div>
                <p>
                    Some Content<br/>
                    <span class="published">Jun 18, 2010 11:29:05 AM</span> - <a target="_blank" href="">Direct Link</a>
                </p>
            </li>
        </ol>
        <div class="timeline">
            <a class="icon shuttle-previous" rel="max_id=16470650733&page=1&q=something">Newer Data</a>
            <a class="icon shuttle-next" rel="max_id=16470650733&page=3&q=something">Older Data</a>
        </div>
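    A first debugging step worth trying (a sketch, not a confirmed fix — it reuses the question's URL and parameters, and the console logging is purely diagnostic): cut the helper callbacks out of the picture and log the raw XHR in both handlers, so you can tell whether Chrome receives an empty body on the wire or the content is being discarded during DOM insertion.

        // Diagnostic sketch: log exactly what Chrome hands back.
        $.ajax({
            type: 'GET',
            url: '/path/to/local/url',
            data: { parameters: 'value', another: 'value2' }, // plain object; no helper needed
            dataType: 'html',
            cache: false,
            success: function (response, status, xhr) {
                console.log('success:', xhr.status, 'length:', (response || '').length);
            },
            error: function (xhr, status, err) {
                console.log('error:', xhr.status, xhr.responseText, err);
            }
        });

    If the response text is populated here but the element still ends up empty, the problem is in the insertion step rather than the request; if it is empty only in Chrome, comparing the response headers (Content-Type, Content-Length) across browsers in the Network panel is the next place to look.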


  • Why can I not upload images to my folder anymore?

    - by Hannah_B
    This was something I had working a few weeks back, but after I made some changes to my view file, images are no longer being saved into my assets/uploads folder. I keep getting back the error: You did not select a file to upload. This is despite having made sure the path is definitely correct. What am I doing wrong here? Here is my controller:

        <?php
        class HomeProfile extends CI_Controller {

            function HomeProfile() {
                parent::__construct();
                $this->load->model("profiles");
                $this->load->model("profileimages");
                $this->load->helper(array('form', 'url'));
            }

            function upload() {
                $config['path'] = './web-project-jb/assets/uploads/';
                $config['allowed_types'] = 'gif|jpg|jpeg|png';
                $config['max_size'] = '10000';
                $config['max_width'] = '1024';
                $config['max_height'] = '768';
                $this->load->library('upload', $config);

                $img = $this->session->userdata('img');
                $username = $this->session->userdata('username');
                $this->profileimages->putProfileImage($username, $this->input->post("profileimage"));

                // fail: show upload form
                if (! $this->upload->do_upload()) {
                    $error = array('error' => $this->upload->display_errors());
                    $username = $this->session->userdata('username');
                    $viewData['username'] = $username;
                    $viewData['profileText'] = $this->profiles->getProfileText($username);
                    $this->load->view('shared/header');
                    $this->load->view('homeprofile/homeprofiletitle', $viewData);
                    $this->load->view('shared/nav');
                    $this->load->view('homeprofile/upload_fail', $error);
                    $this->load->view('homeprofile/homeprofileview', $viewData, array('error' => ' '));
                    $this->load->view('shared/footer');
                    //redirect('homeprofile/index');
                } else {
                    // successful upload so save to database
                    $file_data = $this->upload->data();
                    $data['img'] = base_url().'./web-project-jb/assets/uploads/'.$file_data['file_name'];
                    // you may want to delete the image from the server after saving it to db
                    // check to make sure $data['full_path'] is a valid path
                    // get upload_sucess.php from link above
                    //$image = chunk_split( base64_encode( file_get_contents( $data['file_name'] ) ) );
                    $this->username = $this->session->userdata('username');
                    $data['profileimages'] = $this->profileimages->getProfileImage($username);
                    $viewData['username'] = $username;
                    $viewData['profileText'] = $this->profiles->getProfileText($username);
                    $username = $this->session->userdata('username');
                }
            }

            function index() {
                $username = $this->session->userdata('username');
                $data['profileimages'] = $this->profileimages->getProfileImage($username);
                $viewData['username'] = $username;
                $viewData['profileText'] = $this->profiles->getProfileText($username);
                $this->load->view('shared/header');
                $this->load->view('homeprofile/homeprofiletitle', $viewData);
                $this->load->view('shared/nav');
                //$this->load->view('homeprofile/upload_form', $data);
                $this->load->view('homeprofile/homeprofileview', $data, $viewData, array('error' => ' '));
                $this->load->view('shared/footer');
            }
        }

    Here is my view:

        <div id="maincontent">
            <div id="primary">
                <?//=$error;?>
                <?//=$img;?>
                <h3><?="Profile Image"?></h3>
                <img src="<?php echo'$img'?>" width='300' height='300'/>
                <?=form_open_multipart('homeprofile/upload');?>
                <input type="file" name="img" value=""/>
                <?=form_submit('submit', 'upload')?>
                <?=form_close();?>
                <?php if (isset($error)) echo $error;?>
            </div>
        </div>

    Your help is much appreciated.
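    For what it's worth, two details stand out if this is the stock CodeIgniter Upload library (a sketch under that assumption): the library reads the config key upload_path, not path, and do_upload() looks for a form field named userfile by default, while the view above names its file input img.

        // Sketch: config key and field name the Upload library actually expects.
        $config['upload_path']   = './web-project-jb/assets/uploads/';
        $config['allowed_types'] = 'gif|jpg|jpeg|png';
        $this->load->library('upload', $config);

        // The view's file input is name="img", so pass the field name explicitly:
        if (!$this->upload->do_upload('img')) {
            $error = array('error' => $this->upload->display_errors());
        }

    The field-name mismatch in particular produces exactly the "You did not select a file to upload" message, because do_upload() never finds a file under the default field name.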


  • How to optimize this javascript code?

    - by Andrija
    I have a JSP which uses a lot of JavaScript, and it's just not fast enough. I would like to optimize it, so first, here's a part of the code. In the JSP I have the initialization:

        window.onload = function () {
            formCollection.pageSize.value = "<%= pagingSize%>";
            elemCollection = iDom3.Table.all["spis"].XML.DOM;
            <% if (resultList != null) { %>
                elementsNumber = <%= resultList.size() %>;
            <% } else { %>
                elementsNumber = 0;
            <% } %>
            contextPath = "<%= request.getContextPath() %>";
        }

    In my js file I have two types of js functions:

        // gets the first element and sets its value to all the others;
        // the selectSingleNode function is used because I use an XSLT transformation
        // to generate the table
        _setTehJed = function(){
            var resultId = formCollection.elements["idTehJedinice_spis_1"].value;
            var resultText = formCollection.elements["tehnicka_spis_1"].value;
            if (resultId != ""){
                var counter = 1;
                while (counter < elementsNumber){
                    counter++;
                    if (formCollection.elements["idTehJedinice_spis_"+counter] != null){
                        formCollection.elements["idTehJedinice_spis_"+counter].value = resultId;
                        formCollection.elements["tehnicka_spis_"+counter].value = resultText;
                    }
                    var node = elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'tehnicka']/title");
                    node.text = resultText;
                    var node2 = elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'idTehJedinice']/title");
                    node2.text = resultId;
                }
            }
        }

        // sets the element's checkbox to checked or unchecked
        _SelectCheckRokCuvanja = {
            all : [],
            Item : function (oItem, sId) {
                this.all["spis_"+sId] = oItem.value;
                if (oItem.checked) {
                    elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "true");
                } else {
                    elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "false");
                }
            }
        }

    I've used these tips:
    http://blogs.msdn.com/b/ie/archive/2006/08/28/728654.aspx
    http://code.google.com/speed/articles/optimizing-javascript.html
    but I still think something could be done, like defining the functions like this. In the JSP:

        window.onload = function () {
            iDom3.DigitalnaArhivaPrihvat.formCollection = document.forms["controller"];
            iDom3.DigitalnaArhivaPrihvat.formCollection.pageSize.value = "<%= pagingSize%>";
            iDom3.DigitalnaArhivaPrihvat.elemCollection = iDom3.Table.all["spis"].XML.DOM;
            <% if (resultList != null) { %>
                iDom3.DigitalnaArhivaPrihvat.elementsNumber = <%= resultList.size() %>
            <% } else { %>
                iDom3.DigitalnaArhivaPrihvat.elementsNumber = 0;
            <% } %>
        }

    In the js:

        iDom3.DigitalnaArhivaPrihvat = {
            formCollection: null,
            elemCollection: null,
            elementsNumber: null,
            _setTehJed : function(){
                var resultId = this.formCollection.elements.idTehJedinice_spis_1.value;
                var resultText = this.formCollection.elements.tehnicka_spis_1.value;
                if (resultId != ""){
                    var counter = 1;
                    while (counter < this.elementsNumber){
                        counter++;
                        if (this.formCollection.elements["idTehJedinice_spis_"+counter] !== null){
                            this.formCollection.elements["idTehJedinice_spis_"+counter].value = resultId;
                            this.formCollection.elements["tehnicka_spis_"+counter].value = resultText;
                        }
                        var node = this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'tehnicka']/title");
                        node.text = resultText;
                        var node2 = this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'idTehJedinice']/title");
                        node2.text = resultId;
                    }
                }
            },
            _SelectCheckRokCuvanja = {
                all : [],
                Item : function (oItem, sId) {
                    this.all["spis_"+sId] = oItem.value;
                    if (oItem.checked) {
                        this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "true");
                    } else {
                        this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "false");
                    }
                }
            }

    but the problem is scoping (if I do it like this, the second function does not execute properly). Any suggestions?
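    The scoping problem in the second version is most likely the this binding: inside _SelectCheckRokCuvanja.Item, this refers to _SelectCheckRokCuvanja itself, not to iDom3.DigitalnaArhivaPrihvat, so this.elemCollection is undefined (and _SelectCheckRokCuvanja = { ... } inside the object literal is also a syntax error; it needs a colon). A sketch of one way to write it, referring to the namespace explicitly:

        iDom3.DigitalnaArhivaPrihvat = {
            elemCollection: null,
            // ...
            _SelectCheckRokCuvanja: {            // colon, not "=", inside an object literal
                all: [],
                Item: function (oItem, sId) {
                    // "this" is _SelectCheckRokCuvanja here, so name the namespace directly:
                    var ns = iDom3.DigitalnaArhivaPrihvat;
                    this.all["spis_" + sId] = oItem.value;
                    ns.elemCollection
                        .selectSingleNode("/suite/table/rows/row[@id = 'spis_" + sId + "']/data[@col = 'rokCheck']")
                        .setAttribute("default", oItem.checked ? "true" : "false");
                }
            }
        };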


  • Why are some pictures crooked after using my function?

    - by Miko Kronn
        struct BitmapDataAccessor
        {
            private readonly byte[] data;
            private readonly int[] rowStarts;
            public readonly int Height;
            public readonly int Width;

            public BitmapDataAccessor(byte[] data, int width, int height)
            {
                this.data = data;
                this.Height = height;
                this.Width = width;
                rowStarts = new int[height];
                for (int y = 0; y < Height; y++)
                    rowStarts[y] = y * width;
            }

            public byte this[int x, int y, int color] // Maybe use an enum with Red = 0, Green = 1, and Blue = 2 members?
            {
                get { return data[(rowStarts[y] + x) * 3 + color]; }
                set { data[(rowStarts[y] + x) * 3 + color] = value; }
            }

            public byte[] Data
            {
                get { return data; }
            }
        }

        public static byte[, ,] Bitmap2Byte(Bitmap obraz)
        {
            int h = obraz.Height;
            int w = obraz.Width;
            byte[, ,] wynik = new byte[w, h, 3];
            BitmapData bd = obraz.LockBits(new Rectangle(0, 0, w, h), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            int bytes = Math.Abs(bd.Stride) * h;
            byte[] rgbValues = new byte[bytes];
            IntPtr ptr = bd.Scan0;
            System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes);
            BitmapDataAccessor bda = new BitmapDataAccessor(rgbValues, w, h);
            for (int i = 0; i < h; i++)
            {
                for (int j = 0; j < w; j++)
                {
                    wynik[j, i, 0] = bda[j, i, 2];
                    wynik[j, i, 1] = bda[j, i, 1];
                    wynik[j, i, 2] = bda[j, i, 0];
                }
            }
            obraz.UnlockBits(bd);
            return wynik;
        }

        public static Bitmap Byte2Bitmap(byte[, ,] tablica)
        {
            if (tablica.GetLength(2) != 3)
            {
                throw new NieprawidlowyWymiarTablicyException();
            }
            int w = tablica.GetLength(0);
            int h = tablica.GetLength(1);
            Bitmap obraz = new Bitmap(w, h, PixelFormat.Format24bppRgb);
            for (int i = 0; i < w; i++)
            {
                for (int j = 0; j < h; j++)
                {
                    Color kol = Color.FromArgb(tablica[i, j, 0], tablica[i, j, 1], tablica[i, j, 2]);
                    obraz.SetPixel(i, j, kol);
                }
            }
            return obraz;
        }

    Now, if I do:

        private void btnLoad_Click(object sender, EventArgs e)
        {
            if (dgOpenFile.ShowDialog() == DialogResult.OK)
            {
                try
                {
                    Bitmap img = new Bitmap(dgOpenFile.FileName);
                    byte[, ,] tab = Grafika.Bitmap2Byte(img);
                    picture.Image = Grafika.Byte2Bitmap(tab);
                    picture.Size = img.Size;
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
            }
        }

    Most pictures are handled correctly, but some are not. An example picture that doesn't work produces a skewed result (the original post showed a fragment of the broken output). Why is that?
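    A likely cause (and the classic one with LockBits): the accessor ignores row padding. Bitmap rows are aligned, so bd.Stride is width * 3 rounded up to a multiple of 4 bytes; for widths where width * 3 is not divisible by 4, indexing rows by y * width * 3 drifts a little further on every row, which shows up as exactly this kind of skew. A sketch of a stride-aware accessor (same class, with the stride passed in from BitmapData):

        // Sketch: compute row starts from the actual stride, not from the width.
        public BitmapDataAccessor(byte[] data, int width, int height, int stride)
        {
            this.data = data;
            this.Height = height;
            this.Width = width;
            rowStarts = new int[height];
            for (int y = 0; y < height; y++)
                rowStarts[y] = y * stride;   // stride includes the padding bytes
        }

        public byte this[int x, int y, int color]
        {
            get { return data[rowStarts[y] + x * 3 + color]; }
            set { data[rowStarts[y] + x * 3 + color] = value; }
        }

    constructed as new BitmapDataAccessor(rgbValues, w, h, Math.Abs(bd.Stride)). Pictures whose width happens to make width * 3 a multiple of 4 come out fine, which would explain why only some images are crooked.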


  • Android strange behavior with listview and custom cursor adapter

    - by Michael Little
    I have a problem with a list view and a custom cursor adapter, and I just can't seem to figure out what is wrong with my code. Basically, in my activity I call initialize(), which does a bunch of stuff to handle getting the proper data and initializing the listview. On the first run of the activity you can see from the images that one of the items is missing from the list. If I go to another activity and come back to this activity, the item that was missing shows up. I believe it has something to do with setContentView(R.layout.parent). If I move that to my initialize() then the item never shows up, even when returning from another activity. So, for some reason, returning from another activity bypasses setContentView(R.layout.parent) and everything works fine. I know it's impossible for me to bypass setContentView(R.layout.parent), so I need to figure out what the problem is. Also, I did not include the layout because it is nothing more than two textviews. Also, the images I had attached did not show that the missing item is the last one on the list.

    Custom cursor adapter:

        public class CustomCursorAdapter extends SimpleCursorAdapter {

            private Context context;
            private int layout;

            public CustomCursorAdapter (Context context, int layout, Cursor c, String[] from, int[] to) {
                super(context, layout, c, from, to);
                this.context = context;
                this.layout = layout;
            }

            public View newView(Context context, Cursor cursor, ViewGroup parent) {
                LayoutInflater inflater = LayoutInflater.from(context);
                final View view = inflater.inflate(layout, parent, false);
                return view;
            }

            @Override
            public void bindView(View v, Context context, Cursor c) {
                if (c.getColumnName(0).matches("section")) {
                    int nameCol = c.getColumnIndex("section");
                    String section = c.getString(nameCol);
                    TextView section_text = (TextView) v.findViewById(R.id.text1);
                    if ((section.length() > 0)) {
                        section_text.setText(section);
                    } else {
                        // so we don't have an empty spot
                        section_text.setText("");
                        section_text.setVisibility(2);
                        section_text.setHeight(1);
                    }
                } else if (c.getColumnName(0).matches("code")) {
                    int nameCol = c.getColumnIndex("code");
                    String mCode = c.getString(nameCol);
                    TextView code_text = (TextView) v.findViewById(R.id.text1);
                    if (code_text != null) {
                        int i = 167;
                        byte[] data = {(byte) i};
                        String strSymbol = EncodingUtils.getString(data, "windows-1252");
                        mCode = strSymbol + " " + mCode;
                        code_text.setText(mCode);
                        code_text.setSingleLine();
                    }
                }
                if (c.getColumnName(1).matches("title")) {
                    int nameCol = c.getColumnIndex("title");
                    String mTitle = c.getString(nameCol);
                    TextView title_text = (TextView) v.findViewById(R.id.text2);
                    if (title_text != null) {
                        title_text.setText(mTitle);
                    }
                } else if (c.getColumnName(1).matches("excerpt")) {
                    int nameCol = c.getColumnIndex("excerpt");
                    String mExcerpt = c.getString(nameCol);
                    TextView excerpt_text = (TextView) v.findViewById(R.id.text2);
                    if (excerpt_text != null) {
                        excerpt_text.setText(mExcerpt);
                        excerpt_text.setSingleLine();
                    }
                }
            }
        }

    The activity:

        public class parent extends ListActivity {

            private static String[] TITLE_FROM = { SECTION, TITLE, _ID, };
            private static String[] CODE_FROM = { CODE, EXCERPT, _ID, };
            private static String ORDER_BY = _ID + " ASC";
            private static int[] TO = { R.id.text1, R.id.text2, };
            String breadcrumb = null;
            private MyData data;
            private SQLiteDatabase db;
            CharSequence parent_id = "";

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                data = new MyData(this);
                db = data.getReadableDatabase();
                setContentView(R.layout.parent);
                initialize();
            }

            public void initialize() {
                breadcrumb = null;
                Bundle bun = getIntent().getExtras();
                TextView tvBreadCrumb;
                tvBreadCrumb = (TextView) findViewById(R.id.breadcrumb);
                if (bun == null) {
                    // this is the first run
                    tvBreadCrumb.setText(null);
                    tvBreadCrumb.setHeight(0);
                    parent_id = "0";
                    try {
                        Cursor cursor = getData(parent_id);
                        showSectionData(cursor);
                    } finally {
                        data.close();
                    }
                } else {
                    CharSequence state = bun.getString("state");
                    breadcrumb = bun.getString("breadcrumb");
                    tvBreadCrumb.setText(breadcrumb);
                    CharSequence code = bun.getString("code");
                    parent_id = code;
                    if (state.equals("chapter")) {
                        try {
                            Cursor cursor = getData(parent_id);
                            showSectionData(cursor);
                        } finally {
                            data.close();
                        }
                    } else if (state.equals("code")) {
                        try {
                            Cursor cursor = getCodeData(parent_id);
                            showCodeData(cursor);
                        } finally {
                            data.close();
                        }
                    }
                }
            }

            @Override
            public void onStart() {
                //initialize();
                super.onResume();
            }

            @Override
            public void onResume() {
                initialize();
                super.onResume();
            }

            private Cursor getData(CharSequence parent_id) {
                Cursor cTitles = db.query(TITLES_TABLE_NAME, TITLE_FROM, "parent_id = " + parent_id, null, null, null, ORDER_BY);
                Cursor cCodes = db.query(CODES_TABLE_NAME, CODE_FROM, "parent_id = " + parent_id, null, null, null, ORDER_BY);
                Cursor[] c = { cTitles, cCodes };
                Cursor cursor = new MergeCursor(c);
                startManagingCursor(cursor);
                return cursor;
            }

            private Cursor getCodeData(CharSequence parent_id2) {
                Bundle bun = getIntent().getExtras();
                CharSequence intent = bun.getString("intent");
                CharSequence searchtype = bun.getString("searchtype");
                //SQLiteDatabase db = data.getReadableDatabase();
                if (intent != null) {
                    String sWhere = null;
                    if (searchtype.equals("code")) {
                        sWhere = "code LIKE '%" + parent_id2 + "%'";
                    } else if (searchtype.equals("within")) {
                        sWhere = "definition LIKE '%" + parent_id2 + "%'";
                    }
                    // This is a search request
                    Cursor cursor = db.query(CODES_TABLE_NAME, CODE_FROM, sWhere, null, null, null, ORDER_BY);
                    startManagingCursor(cursor);
                    return cursor;
                } else {
                    Cursor cursor = db.query(CODES_TABLE_NAME, CODE_FROM, "parent_id = " + parent_id2, null, null, null, ORDER_BY);
                    startManagingCursor(cursor);
                    return cursor;
                }
            }

            private void showSectionData(Cursor cursor) {
                CustomCursorAdapter adapter = new CustomCursorAdapter(this, R.layout.item, cursor, TITLE_FROM, TO);
                setListAdapter(adapter);
            }

            private void showCodeData(Cursor cursor) {
                CustomCursorAdapter adapter = new CustomCursorAdapter(this, R.layout.item, cursor, CODE_FROM, TO);
                setListAdapter(adapter);
                Bundle bun = getIntent().getExtras();
                CharSequence intent = bun.getString("intent");
                if (intent != null) {
                    Cursor cursor1 = ((CursorAdapter) getListAdapter()).getCursor();
                    startManagingCursor(cursor1);
                    TextView tvBreadCrumb;
                    tvBreadCrumb = (TextView) findViewById(R.id.breadcrumb);
                    tvBreadCrumb.setText(cursor1.getCount() + " Records Found");
                    //cursor1.close(); //mdl
                }
            }
        }
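    One thing worth ruling out (a sketch based only on the activity lifecycle, not a confirmed diagnosis): onResume() always runs after onCreate(), so on a fresh start initialize() is executed twice — once directly and once from onResume() — and each pass calls startManagingCursor() and rebuilds the adapter. Initializing in a single place makes the first run take the same path as returning from another activity:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            data = new MyData(this);
            db = data.getReadableDatabase();
            setContentView(R.layout.parent);
            // no initialize() here; onResume() is always called next
        }

        @Override
        public void onResume() {
            super.onResume();   // call through to super first
            initialize();
        }

    Note also that the posted onStart() calls super.onResume() instead of super.onStart(), which breaks the lifecycle contract and is worth fixing regardless.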


  • Neo4j Windows Plugin Installation desktop V2.M06

    - by user2904850
    I have downloaded and installed the latest Windows V2 Community M06 build of Neo4j on a Windows 7 64-bit machine (I have tried both the 32-bit and 64-bit installs). Installation proceeds without any problems and the system runs normally, but there is no /plugins directory (in both versions):

        \Program Files (x86)\Neo4j Community\
        \Program Files (x86)\Neo4j Community\.install4j
        \Program Files (x86)\Neo4j Community\bin

    I am trying to install the Spatial plugin, so I tried creating the \plugins directory. I extracted the zip file and left the zip file in the directory, but the plugins are not found:

        C:\Users\WFN44217>curl localhost:7474/db/data/
        {
          "extensions" : { },
          "node" : "http://localhost:7474/db/data/node",
          "reference_node" : "http://localhost:7474/db/data/node/0",
          "node_index" : "http://localhost:7474/db/data/index/node",
          "relationship_index" : "http://localhost:7474/db/data/index/relationship",
          "extensions_info" : "http://localhost:7474/db/data/ext",
          "relationship_types" : "http://localhost:7474/db/data/relationship/types",
          "batch" : "http://localhost:7474/db/data/batch",
          "cypher" : "http://localhost:7474/db/data/cypher",
          "transaction" : "http://localhost:7474/db/data/transaction",
          "neo4j_version" : "2.0.0-M06"
        }

    I have tried some other plugins, but these are also not found. Any idea what might be missing? Extract from the log files:

        2013-10-17 11:19:31.881+0000 INFO [o.n.k.i.DiagnosticsManager]: VM Arguments: [-Dexe4j.semaphoreName=Local\c:_program_files_neo4j_community_bin_neo4j-community.exe, -Dexe4j.isInstall4j=true, -Dexe4j.moduleName=C:\Program Files\Neo4j Community\bin\neo4j-community.exe, -Dexe4j.processCommFile=C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp, -Dexe4j.tempDir=, -Dexe4j.unextractedPosition=0, -Djava.library.path=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files\Microsoft SQL Server\110\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\;C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin\;C:\Program Files\Java\jdk1.7.0_40\bin\;C:\Program Files (x86)\Git\cmd;C:\Program Files (x86)\Git\bin;c:\ikvm-7.2.4630.5\bin;C:\Program Files\nodejs\;C:\Users\WFN44217\AppData\Roaming\npm;c:\program files\neo4j community\jre\bin, -Dexe4j.consoleCodepage=cp0, -Dinstall4j.launcherId=24, -Dinstall4j.swt=false]
        2013-10-17 11:19:31.881+0000 INFO [o.n.k.i.DiagnosticsManager]: Java classpath:
        2013-10-17 11:19:31.883+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/neo4j-desktop-2.0.0-M06.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\charsets.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\.install4j\i4jruntime.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/jaccess.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/zipfs.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\resources.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jfr.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jsse.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\bin\neo4j-desktop-2.0.0-M06.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunec.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\classes
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\rt.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunmscapi.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dns_sd.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dnsns.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunjce_provider.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/localedata.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/access-bridge-64.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/.install4j/i4jruntime.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\sunrsasign.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jce.jar
        2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: Library path:
        2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32
        2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows
        2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\wbem
        2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\WindowsPowerShell\v1.0
        2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft\Web Platform Installer
        2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0
        2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit
        2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\Tools\Binn
        2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\DTS\Binn
        2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn
        2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio
        2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn
        2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin
        2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Java\jdk1.7.0_40\bin
        2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\cmd
        2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\bin
        2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\ikvm-7.2.4630.5\bin
        2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\nodejs
        2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Users\WFN44217\AppData\Roaming\npm
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Neo4j Community\jre\bin
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: System.properties:
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.moduleName = C:\Program Files\Neo4j Community\bin\neo4j-community.exe
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.processCommFile = C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.semaphoreName = Local\c:_program_files_neo4j_community_bin_neo4j-community.exe
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: sun.boot.library.path = c:\program files\neo4j community\jre\bin
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.consoleCodepage = cp0
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: path.separator = ;
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: file.encoding.pkg = sun.io
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: user.country = GB
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: user.script =
        2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: sun.os.patch.level = Service Pack 1
        2013-10-17 11:19:31.891+0000 INFO [o.n.k.i.DiagnosticsManager]: install4j.exeDir = C:\Program Files\Neo4j Community\bin\
        2013-10-17 11:19:31.891+0000 INFO [o.n.k.i.DiagnosticsManager]: user.dir = C:\Program Files\Neo4j Community\bin
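    One quick check that uses only the URLs already shown above: if the server had actually loaded a plugin jar, it should be reported through the extensions_info endpoint, so an empty "extensions" : { } block in the root response suggests the jar was never picked up at all (wrong directory for this build, or the zip left unpacked so the jar doesn't sit directly in the folder the server scans).

        C:\Users\WFN44217>curl localhost:7474/db/data/ext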


  • How can I get the following to compile on UVA?

    - by Michael Tsang
    Note the comment below. It cannot be compiled on UVA because of a bug in GCC.

        #include <cstdio>
        #include <cstring>
        #include <cctype>
        #include <map>
        #include <stdexcept>

        class Board {
        public:
            bool read(FILE *);
            enum Colour {none, white, black};
            Colour check() const;
        private:
            struct Index {
                size_t x;
                size_t y;
                Index &operator+=(const Index &) throw(std::range_error);
                Index operator+(const Index &) const throw(std::range_error);
            };
            const static std::size_t size = 8;
            char data[size][size];

            // Cannot be compiled on GCC 4.1.2 due to GCC bug 29993
            // http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29993
            typedef bool CheckFunction(Colour, const Index &) const;
            CheckFunction pawn, knight, bishop, king, rook;

            bool queen(const Colour c, const Index &location) const {
                return rook(c, location) || bishop(c, location);
            }
            static char get_king(Colour c) {
                return c == white ? 'k' : 'K';
            }
            template<std::size_t n>
            bool check_consecutive(Colour c, const Index &location, const Index (&offsets)[n]) const {
                for(const Index *p = offsets; p != (&offsets)[1]; ++p) {
                    try {
                        Index target = location + *p;
                        for(; data[target.x][target.y] == '.'; target += *p) {
                        }
                        if(data[target.x][target.y] == get_king(c))
                            return true;
                    } catch(std::range_error &) {
                    }
                }
                return false;
            }
            template<std::size_t n>
            bool check_distinct(Colour c, const Index &location, const Index (&offsets)[n]) const {
                for(const Index *p = offsets; p != (&offsets)[1]; ++p) {
                    try {
                        Index target = location + *p;
                        if(data[target.x][target.y] == get_king(c))
                            return true;
                    } catch(std::range_error &) {
                    }
                }
                return false;
            }
        };

        int main() {
            Board board;
            for(int d = 1; board.read(stdin); ++d) {
                Board::Colour c = board.check();
                const char *sp;
                switch(c) {
                case Board::black: sp = "white"; break;
                case Board::white: sp = "black"; break;
                case Board::none: sp = "no"; break;
                }
                std::printf("Game #%d: %s king is in check.\n", d, sp);
                std::getchar(); // discard empty line
            }
        }

        bool Board::read(FILE *f) {
            static const char empty[] = "........" "........" "........" "........"
                                        "........" "........" "........" "........"; // 64 dots
            for(char (*p)[size] = data; p != (&data)[1]; ++p) {
                std::fread(*p, size, 1, f);
                std::fgetc(f); // discard new-line
            }
            return std::memcmp(empty, data, sizeof data);
        }

        Board::Colour Board::check() const {
            std::map<char, CheckFunction Board::*> fp;
            fp['P'] = &Board::pawn;
            fp['N'] = &Board::knight;
            fp['B'] = &Board::bishop;
            fp['Q'] = &Board::queen;
            fp['K'] = &Board::king;
            fp['R'] = &Board::rook;
            for(std::size_t i = 0; i != size; ++i) {
                for(std::size_t j = 0; j != size; ++j) {
                    CheckFunction Board::* p = fp[std::toupper(data[i][j])];
                    if(p) {
                        Colour ret;
                        if(std::isupper(data[i][j]))
                            ret = white;
                        else
                            ret = black;
                        if((this->*p)(ret, (Index){i, j} /* C99 extension */))
                            return ret;
                    }
                }
            }
            return none;
        }

        bool Board::pawn(const Colour c, const Index &location) const {
            const std::ptrdiff_t sh = c == white ? -1 : 1;
            const Index offsets[] = { {sh, 1}, {sh, -1} };
            return check_distinct(c, location, offsets);
        }

        bool Board::knight(const Colour c, const Index &location) const {
            static const Index offsets[] = { {1, 2}, {2, 1}, {2, -1}, {1, -2}, {-1, -2}, {-2, -1}, {-2, 1}, {-1, 2} };
            return check_distinct(c, location, offsets);
        }

        bool Board::bishop(const Colour c, const Index &location) const {
            static const Index offsets[] = { {1, 1}, {1, -1}, {-1, -1}, {-1, 1} };
            return check_consecutive(c, location, offsets);
        }

        bool Board::rook(const Colour c, const Index &location) const {
            static const Index offsets[] = { {1, 0}, {0, -1}, {0, 1}, {-1, 0} };
            return check_consecutive(c, location, offsets);
        }

        bool Board::king(const Colour c, const Index &location) const {
            static const Index offsets[] = { {-1, -1}, {-1, 0}, {-1, 1}, {0, 1}, {1, 1}, {1, 0}, {1, -1}, {0, -1} };
            return check_distinct(c, location, offsets);
        }

        Board::Index &Board::Index::operator+=(const Index &rhs) throw(std::range_error) {
            if(x + rhs.x >= size || y + rhs.y >= size)
                throw std::range_error("result is larger than size");
            x += rhs.x;
            y += rhs.y;
            return *this;
        }

        Board::Index Board::Index::operator+(const Index &rhs) const throw(std::range_error) {
            Index ret = *this;
            return ret += rhs;
        }
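    A sketch of the usual workaround for that typedef limitation on old GCC: drop the function-type typedef, declare each checker with an explicit signature, and spell out the pointer-to-member type where the map is built (names unchanged from the question; only the declarations move):

        // Instead of:
        //     typedef bool CheckFunction(Colour, const Index &) const;
        //     CheckFunction pawn, knight, bishop, king, rook;
        // declare each member explicitly:
        bool pawn(Colour, const Index &) const;
        bool knight(Colour, const Index &) const;
        bool bishop(Colour, const Index &) const;
        bool king(Colour, const Index &) const;
        bool rook(Colour, const Index &) const;

        // ... and in check(), name the pointer-to-member type directly:
        typedef bool (Board::*CheckMember)(Colour, const Index &) const;
        std::map<char, CheckMember> fp;

    The member definitions and the fp['P'] = &Board::pawn; assignments stay exactly as they are, since the declared types are the same; only the declaration syntax that triggers the GCC 4.1.2 bug is avoided.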


  • Autocorrelation returns random results with mic input (using a high pass filter)

    - by Niall
    Hello, sorry to ask a similar question to the one I asked before (FFT Problem (Returns random results)), but I've looked up pitch detection and autocorrelation and have found some code for pitch detection using autocorrelation. I'm trying to do pitch detection of a user's singing. The problem is, it keeps returning random results. I've got some code from http://code.google.com/p/yaalp/ which I've converted to C++ and modified (below). My sample rate is 2048, and the data size is 1024. I'm detecting the pitch of both a sine wave and mic input. The frequency of the sine wave is 726.0, and it's detected as 722.950820 (which I'm OK with), but the pitch of the mic is detected as a random number from around 100 to around 1050. I'm now using a high-pass filter to remove the DC offset, but it's not working. Am I doing it right, and if so, what else can I do to fix it? Any help would be greatly appreciated!

        double* doHighPassFilter(short *buffer)
        {
            // Do FFT:
            int bufferLength = 1024;
            float *real = malloc(bufferLength*sizeof(float));
            float *real2 = malloc(bufferLength*sizeof(float));
            for(int x = 0; x < bufferLength; x++) {
                real[x] = buffer[x];
            }
            fft(real, bufferLength);
            for(int x = 0; x < bufferLength; x += 2) {
                real2[x] = real[x];
            }
            for (int i = 0; i < 30; i++) // Set freqs lower than 30hz to zero to attenuate the low frequencies
                real2[i] = 0;
            // Do inverse FFT:
            inversefft(real2, bufferLength);
            double* real3 = (double*)real2;
            return real3;
        }

        double DetectPitch(short* data)
        {
            int sampleRate = 2048;
            // Create sine wave
            double *buffer = malloc(1024*sizeof(short));
            double amplitude = 0.25 * 32768; // 0.25 * max length of short
            double frequency = 726.0;
            for (int n = 0; n < 1024; n++) {
                buffer[n] = (short)(amplitude * sin((2 * 3.14159265 * n * frequency) / sampleRate));
            }
            doHighPassFilter(data);
            printf("Pitch from sine wave: %f\n", detectPitchCalculation(buffer, 50.0, 1000.0, 1, 1));
            printf("Pitch from mic: %f\n", detectPitchCalculation(data, 50.0, 1000.0, 1, 1));
            return 0;
        }

        // These work by shifting the signal until it seems to correlate with itself.
        // In other words, if the signal looks very similar to (signal shifted 200 samples),
        // then the fundamental period is probably 200 samples.
        // Note that the algorithm only works well when there's only one prominent fundamental.
        // This could be optimized by looking at the rate of change to determine a maximum without testing all periods.
        double detectPitchCalculation(double* data, double minHz, double maxHz, int nCandidates, int nResolution)
        {
            //-------------------------1-------------------------//
            // note that higher frequency means lower period
            int nLowPeriodInSamples = hzToPeriodInSamples(maxHz, 2048);
            int nHiPeriodInSamples = hzToPeriodInSamples(minHz, 2048);
            if (nHiPeriodInSamples <= nLowPeriodInSamples) printf("Bad range for pitch detection.");
            if (1024 < nHiPeriodInSamples) printf("Not enough data.");
            double *results = new double[nHiPeriodInSamples - nLowPeriodInSamples];

            //-------------------------2-------------------------//
            for (int period = nLowPeriodInSamples; period < nHiPeriodInSamples; period += nResolution) {
                double sum = 0;
                // for each sample, find correlation. (If they are far apart, small)
                for (int i = 0; i < 1024 - period; i++)
                    sum += data[i] * data[i + period];
                double mean = sum / 1024.0;
                results[period - nLowPeriodInSamples] = mean;
            }

            //-------------------------3-------------------------//
            // find the best indices
            int *bestIndices = findBestCandidates(nCandidates, results, nHiPeriodInSamples - nLowPeriodInSamples - 1); // note findBestCandidates modifies parameter
            // convert back to Hz
            double *res = new double[nCandidates];
            for (int i = 0; i < nCandidates; i++)
                res[i] = periodInSamplesToHz(bestIndices[i] + nLowPeriodInSamples, 2048);
            double pitch2 = res[0];
            free(res);
            free(results);
            return pitch2;
        }

        /// Finds n "best" values from an array. Returns the indices of the best parts.
        /// (One way to do this would be to sort the array, but that could take too long.)
        /// Warning: Changes the contents of the array!!! Do not use the result array afterwards.
        int* findBestCandidates(int n, double* inputs, int length)
        {
            //int length = inputs.Length;
            if (length < n) printf("Length of inputs is not long enough.");
            int *res = new int[n];
            double minValue = 0;
            for (int c = 0; c < n; c++) {
                // find the highest.
                double fBestValue = minValue;
                int nBestIndex = -1;
                for (int i = 0; i < length; i++) {
                    if (inputs[i] > fBestValue) {
                        nBestIndex = i;
                        fBestValue = inputs[i];
                    }
                }
                // record this highest value
                res[c] = nBestIndex;
                // now blank out that index.
                if (nBestIndex != -1) inputs[nBestIndex] = minValue;
            }
            return res;
        }

        int hzToPeriodInSamples(double hz, int sampleRate)
        {
            return (int)(1 / (hz / (double)sampleRate));
        }

        double periodInSamplesToHz(int period, int sampleRate)
        {
            return 1 / (period / (double)sampleRate);
        }

    Thanks, Niall.

    Edit: Changed the code to implement a high-pass filter with a cutoff of 30 Hz (from What Are High-Pass and Low-Pass Filters?; can anyone tell me how to convert the low-pass filter using convolution to a high-pass one?), but it's still returning random results. Plugging it into a VST host and using VST plugins to compare spectrums isn't an option for me, unfortunately.
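    On the "convert the low-pass to a high-pass" question: for just removing DC offset and rumble before autocorrelation, a one-pole DC-blocking filter in the time domain is a simpler alternative to the FFT round-trip (the approach above also only copies every other FFT value into real2, which mangles the spectrum before the inverse transform). A sketch — the 0.995 coefficient is a conventional choice, not something from the original code:

        // DC-blocking high-pass: y[n] = x[n] - x[n-1] + R * y[n-1], applied in place.
        void removeDCOffset(double *data, int length)
        {
            const double R = 0.995;      // pole near 1 => cutoff of a few Hz
            double prevIn = data[0];
            double prevOut = 0.0;
            data[0] = 0.0;
            for (int i = 1; i < length; i++) {
                double in = data[i];
                prevOut = in - prevIn + R * prevOut;
                prevIn = in;
                data[i] = prevOut;
            }
        }

    Note also that doHighPassFilter(data) above computes a filtered buffer but its return value is never used, and a short* is passed to detectPitchCalculation where a double* is expected — so the mic samples being correlated are the raw, unfiltered ones.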


  • Optimizing sorting container of objects with heap-allocated buffers - how to avoid hard-copying buffers?

    - by Kache4
    I was making sure I knew how to do the op= and copy constructor correctly in order to sort() properly, so I wrote up a test case. After getting it to work, I realized that the op= was hard-copying all the data_. I figure if I wanted to sort a container with this structure (its elements have heap-allocated char buffer arrays), it'd be faster to just swap the pointers around. Is there a way to do that? Would I have to write my own sort/swap function?

        #include <deque>
        //#include <string>
        //#include <utility>
        //#include <cstdlib>
        #include <cstring>
        #include <iostream>
        //#include <algorithm> // I use sort(), so why does this still compile when commented out?
        #include <boost/filesystem.hpp>
        #include <boost/foreach.hpp>

        using namespace std;
        namespace fs = boost::filesystem;

        class Page {
        public:
            // constructor
            Page(const char* path, const char* data, int size)
                : path_(fs::path(path)), size_(size), data_(new char[size]) {
                // cout << "Creating Page..." << endl;
                strncpy(data_, data, size);
                // cout << "done creating Page..." << endl;
            }

            // copy constructor
            Page(const Page& other)
                : path_(fs::path(other.path())), size_(other.size()), data_(new char[other.size()]) {
                // cout << "Copying Page..." << endl;
                strncpy(data_, other.data(), size_);
                // cout << "done copying Page..." << endl;
            }

            // destructor
            ~Page() {
                delete[] data_;
            }

            // accessors
            const fs::path& path() const { return path_; }
            const char* data() const { return data_; }
            int size() const { return size_; }

            // operators
            Page& operator = (const Page& other) {
                if (this == &other)
                    return *this;
                char* newImage = new char[other.size()];
                strncpy(newImage, other.data(), other.size());
                delete[] data_;
                data_ = newImage;
                path_ = fs::path(other.path());
                size_ = other.size();
                return *this;
            }

            bool operator < (const Page& other) const {
                return path_ < other.path();
            }

        private:
            fs::path path_;
            int size_;
            char* data_;
        };

        class Book {
        public:
            Book(const char* path)
                : path_(fs::path(path)) {
                cout << "Creating Book..." << endl;
                cout << "pushing back #1" << endl;
                pages_.push_back(Page("image1.jpg", "firstImageData", 14));
                cout << "pushing back #3" << endl;
                pages_.push_back(Page("image3.jpg", "thirdImageData", 14));
                cout << "pushing back #2" << endl;
                pages_.push_back(Page("image2.jpg", "secondImageData", 15));

                cout << "testing operator <" << endl;
                cout << pages_[0].path().string() << (pages_[0] < pages_[1] ? " < " : " > ") << pages_[1].path().string() << endl;
                cout << pages_[1].path().string() << (pages_[1] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;
                cout << pages_[0].path().string() << (pages_[0] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;

                cout << "sorting" << endl;
                BOOST_FOREACH (Page p, pages_)
                    cout << p.path().string() << endl;
                sort(pages_.begin(), pages_.end());
                cout << "done sorting\n";
                BOOST_FOREACH (Page p, pages_)
                    cout << p.path().string() << endl;

                cout << "checking datas" << endl;
                BOOST_FOREACH (Page p, pages_) {
                    char data[p.size() + 1];
                    strncpy((char*)&data, p.data(), p.size());
                    data[p.size()] = '\0';
                    cout << p.path().string() << " " << data << endl;
                }
                cout << "done Creating Book" << endl;
            }

        private:
            deque<Page> pages_;
            fs::path path_;
        };

        int main() {
            Book* book = new Book("/some/path/");
        }
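    On the pointer-swapping idea, a sketch (member names taken from the question): give Page a member swap that exchanges its internals, expose it as a free function, and the buffer bytes never move. Whether a pre-C++11 std::sort actually picks up a custom swap is implementation-dependent, so sorting a container of pointers is the dependable route if copies must be avoided entirely:

        // O(1) exchange: the buffer pointer moves, the bytes never do.
        class Page {
        public:
            void swap(Page &other) {
                std::swap(path_, other.path_);
                std::swap(size_, other.size_);
                std::swap(data_, other.data_);
            }
            // ... rest as in the question ...
        };

        inline void swap(Page &a, Page &b) { a.swap(b); }

        // Dependable alternative: sort pointers, so Page objects never move at all.
        bool pageLess(const Page *a, const Page *b) { return *a < *b; }
        // with deque<Page*> pages:
        //     std::sort(pages.begin(), pages.end(), pageLess);

    (A copy-and-swap operator= built on the same member swap would also simplify the assignment operator and make it exception-safe.)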


  • android webview returns blank page when loading a dynamic html page

    - by user2962555
    I am trying to click one button to load a page into a div block dynamically. To test it, I try to append a list item with the text "abc" into the loaded page. However, I always get a blank page. The load function itself works fine, because if I try to load a static page, it works. Following is my main html page code.

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8">
            <meta name="viewport" content="width=device-width, initial-scale=1">
            <title>LoadPageTest</title>
            <link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Open+Sans:300,400,700">
            <link rel="stylesheet" href="./css/customizedstyle.css">
            <link rel="stylesheet" href="./css/themes/default/jquery.mobile-1.4.3.min.css">
            <link rel="stylesheet" href="./css/jqm-demos.css">
            <script src="./js/jquery.js"></script>
            <script scr="./js/customizedjs.js"></script>
            <script src="./js/jquery.mobile-1.4.3.min.js"></script>
            <script>
            $( document ).on( "pagecreate", "#demo-page", function() {
                $( document ).on( "swipeleft swiperight", "#demo-page", function( e ) {
                    if ( $( ".ui-page-active" ).jqmData( "panel" ) !== "open" ) {
                        if ( e.type === "swipeleft" ) {
                            $( "#right-panel" ).panel( "open" );
                        }
                    }
                });
            });
            </script>
            <style type="text/css">
            body { overflow:hidden; }
            </style>
        </head>
        <body style="overflow:hidden" scrolling="no">
            <style type="text/css">
            body { overflow:hidden; }
            </style>
            <div data-role="page" id="main-page" style="overflow:hidden" scrolling="no">
                <div role="main" class="ui-content" id="maindiv" style="overflow: auto">
                    Will load diff pages here.
                </div><!-- /content -->
                <div data-role="panel" id="left-panel" data-theme="b">
                    <ul data-role="listview" data-icon="false" id="menu">
                        <li>
                            <a href="#" id="btnA" data-rel="close">Go Page A
                            <img src="./images/icona.png" class="ui-li-thumb"/>
                        </li>
                        <li>
                            <a href="#" id="btnB" data-rel="close">Go Page B
                            <img src="./images/iconb.png" class="ui-li-thumb"/>
                        </li>
                    </ul>
                </div><!-- /panel -->
                <script type="text/javascript">
                $("#btnA").on("click", function(){
                    $("#maindiv").empty();
                    $("#maindiv").load("pageA.html");
                });
                $("#btnB").on("click", function(){
                    $("#maindiv").empty();
                    $("#maindiv").load("pageB.html");
                });
                </script>
            </div><!-- /page -->
        </body>
        </html>

    Next is the code for the page I try to load dynamically.

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8">
            <meta name="viewport" content="width=device-width, initial-scale=1">
            <title>Page should be loaded</title>
            <link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Open+Sans:300,400,700">
            <link rel="stylesheet" href="./css/customizedstyle.css">
            <link rel="stylesheet" href="./css/themes/default/jquery.mobile-1.4.3.min.css">
            <link rel="stylesheet" href="./css/jqm-demos.css">
            <script src="./js/jquery.js"></script>
            <script scr="./js/customizedjs.js"></script>
            <script src="./js/jquery.mobile-1.4.3.min.js"></script>
            <script>
            $(document).on('pagebeforeshow', function () {
                $('#postlist').append('<li> abc </li>');
                $('#postlist').listview('refresh');
            });
            </script>
        </head>
        <body>
            <div data-role="page" id="posthome">
                <div data-role="content">
                    <ul data-role='listview' id="postlist">
                    </ul>
                </div>
            </div>
        </body>
        </html>

    I suspect it is because the JavaScript in the page doesn't work, since the swipe JS code in the main page seems not to work either. Is that possible? I have enabled JavaScript in the onCreate() function of the activity file as below.

        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_message);
            new LongRunningGetIO().execute();
            mWebView = (WebView) findViewById(R.id.webview);
            mWebView.setWebViewClient(new AppClient());
            mWebView.setVerticalScrollBarEnabled(false);
            mWebView.getSettings().setJavaScriptEnabled(true);
            mWebView.getSettings().setDomStorageEnabled(true);
            mWebView.loadUrl("file:///android_asset/index.html");
        }

    I noticed there is a warning on the statement that enables JavaScript: "Using setJavaScriptEnabled can introduce XSS vulnerabilities into your application, review carefully". Could that maybe be the reason? I then added @SuppressLint("SetJavaScriptEnabled") on top of the activity. The warning is gone, but the JS code in the pages still seems not to work.
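    Two things worth checking (observations from the code above, plus a sketch using the standard WebChromeClient API): first, both pages load customizedjs.js with scr= instead of src=, so that script never loads at all; second, a WebView only surfaces JavaScript errors if you ask it to, so routing console output into logcat will show whether the jQuery Mobile code is actually throwing:

        mWebView.setWebChromeClient(new WebChromeClient() {
            @Override
            public boolean onConsoleMessage(ConsoleMessage cm) {
                // Forward page-side console.log() calls and JS errors to logcat.
                Log.d("WebViewJS", cm.message() + " -- line "
                        + cm.lineNumber() + " of " + cm.sourceId());
                return true;
            }
        });

    The @SuppressLint("SetJavaScriptEnabled") annotation only silences the lint warning; it has no effect on runtime behavior.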


  • unmet dependencies in Ubuntu 12.04

    - by lee.O
    I tried today to install a dvb-card on my Ubuntu 12.04 (Linux blauhai-linux 3.2.0-25-generic #40-Ubuntu SMP Wed May 23 20:30:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux). The installation failed with an error. After that, I tried to install python (it was already installed, but I got this error):

        linux:~$ sudo apt-get install git
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        git is already the newest version.
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         python-glade2:i386 : Depends: python:i386 (< 2.5) but it is not going to be installed
                              Depends: python-support:i386 (= 0.3.4) but it is not installable
                              Depends: python:i386 (= 2.4) but it is not going to be installed
                              Depends: libglade2-0:i386 (= 1:2.5.1) but it is not going to be installed
                              Depends: python-gtk2:i386 (= 2.8.6-8) but it is not going to be installed
         python-numeric:i386 : Depends: python:i386 (< 2.5) but it is not going to be installed
                               Depends: python:i386 (= 2.3) but it is not going to be installed
                               Depends: python-central:i386 (= 0.5.7) but it is not installable
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Well, I can read, so I tried the proposed command, but then I get this:

        linux:~$ sudo apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          libopenal1:i386 libsdl-ttf2.0-0:i386 libkrb5-3:i386 libgconf-2-4:i386 libsm-dev libatk1.0-0:i386 libk5crypto3:i386 libstdc++5:i386 libqt4-declarative:i386 libxcomposite1:i386 libice-dev libgail18:i386 libldap-2.4-2:i386 libao-common libv4l-0:i386 liblcms1:i386 libqt4-qt3support:i386 libroken18-heimdal:i386 libunistring0:i386 libcupsimage2:i386 libgphoto2-port0:i386 libidn11:i386 libnss3:i386 libcaca0:i386 gtk2-engines:i386 libgudev-1.0-0:i386 libjpeg-turbo8:i386 libpthread-stubs0 libcairo-gobject2:i386 libavc1394-0:i386 libjpeg8:i386 libotr2 libaio1:i386 libsane:i386 odbcinst1debian2 odbcinst1debian2:i386 libqt4-test:i386 libqt4-script:i386 libqt4-designer:i386 libsdl-mixer1.2:i386 libqt4-network:i386 libqt4-dbus:i386 libcap2:i386 libproxy1:i386 ibus-gtk:i386 libdbus-glib-1-2:i386 libtdb1:i386 libasn1-8-heimdal:i386 libspeex1:i386 libxslt1.1:i386 libgomp1:i386 libcapi20-3:i386 libibus-1.0-0:i386 libcairo2:i386 libgnutls26:i386 libopenal-data odbcinst libgssapi3-heimdal:i386 libcanberra0:i386 libtasn1-3:i386 libfreetype6:i386 x11proto-kb-dev gtk2-engines-murrine:i386 libwavpack1:i386 libqt4-opengl:i386 libsoup-gnome2.4-1:i386 libv4lconvert0:i386 gstreamer0.10-plugins-good:i386 libc6-i386 lib32gcc1 libqt4-xmlpatterns:i386 librsvg2-common:i386 libdatrie1:i386 xtrans-dev libavahi-common-data:i386 libiec61883-0:i386 lib32asound2 libgdk-pixbuf2.0-0:i386 libsdl-image1.2:i386 libp11-kit0:i386 x11proto-input-dev libwind0-heimdal:i386 libpixman-1-0:i386 libsdl1.2debian:i386 libxaw7:i386 libgdbm3:i386 libcups2:i386 libcurl3:i386 libqtcore4:i386 libxinerama1:i386 libesd0:i386 libmikmod2:i386 libkrb5support0:i386 libxft2:i386 libxt-dev libcroco3:i386 libpulse-mainloop-glib0:i386 libice6:i386 libaa1:i386 libieee1284-3:i386 libgcrypt11:i386 libthai0:i386 libao4:i386 libkeyutils1:i386 libxmu6:i386 libcanberra-gtk0:i386 libvorbisfile3:i386 libqt4-sql:i386 esound-common libxpm4:i386 libqt4-svg:i386 libusb-0.1-4:i386 libgail-common:i386 libxrender1:i386 libhcrypto4-heimdal:i386
libraw1394-11:i386 libnspr4:i386 libshout3:i386 libdv4:i386 libhx509-5-heimdal:i386 libxau-dev libqt4-xml:i386 gstreamer0.10-x:i386 libgettextpo0:i386 libxss1:i386 libgd2-xpm:i386 libheimbase1-heimdal:i386 libtiff4:i386 libsdl-net1.2:i386 libjasper1:i386 libgnome-keyring0:i386 libxtst6:i386 gtk2-engines-pixbuf:i386 libqtgui4:i386 libtag1c2a:i386 librsvg2-2:i386 libavahi-client3:i386 libssl0.9.8:i386 libmpg123-0:i386 libmad0:i386 libsasl2-2:i386 xorg-sgml-doctools libgsoap1 gtk2-engines-oxygen:i386 libfontconfig1:i386 xaw3dg:i386 libpango1.0-0:i386 libsm6:i386 libx11-dev libheimntlm0-heimdal:i386 libpulsedsp:i386 lib32stdc++6 libx11-doc libqt4-sql-mysql:i386 libxcb-render0:i386 libodbc1:i386 libexif12:i386 libqt4-scripttools:i386 librtmp0:i386 libgssapi-krb5-2:i386 libxi6:i386 libqtwebkit4:i386 libxcb1-dev libxp6:i386 libaudio2:i386 libxcursor1:i386 libxcb-shm0:i386 libxt6:i386 libxv1:i386 libsasl2-modules:i386 libavahi-common3:i386 libxrandr2:i386 x11proto-core-dev libsqlite3-0:i386 libmng1:i386 libgtk2.0-0:i386 libxdmcp-dev libpthread-stubs0-dev libltdl7:i386 libkrb5-26-heimdal:i386 libssl1.0.0:i386 glib-networking:i386 libgpg-error0:i386 libsoup2.4-1:i386 libgphoto2-2:i386 libtag1-vanilla:i386 libaudiofile1:i386 libglade2-0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: default-jre default-jre-headless icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx icedtea-netx-common libglade2-0:i386 libpython3.2 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib python3 python3-minimal python3-uno python3.2 python3.2-minimal Suggested packages: icedtea-plugin sun-java6-fonts fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts python3-doc python3-tk python3.2-doc binfmt-support The following packages will be REMOVED: activity-log-manager-control-center aisleriot alacarte apparmor apport apport-gtk apt-xapian-index aptdaemon apturl apturl-common bluez bluez-alsa bluez-alsa:i386 bluez-gstreamer checkbox checkbox-qt command-not-found compiz compiz-gnome compiz-plugins-main-default compizconfig-backend-gconf deja-dup duplicity eog evolution-data-server firefox firefox-globalmenu firefox-gnome-support foomatic-db-compressed-ppds gconf-editor gconf2 gdb gedit gir1.2-mutter-3.0 gir1.2-peas-1.0 gir1.2-rb-3.0 gir1.2-totem-1.0 gir1.2-ubuntuoneui-3.0 gksu gnome-applets gnome-applets-data gnome-bluetooth gnome-contacts gnome-control-center gnome-media gnome-menus gnome-orca gnome-panel gnome-panel-data gnome-session-fallback gnome-shell gnome-sudoku gnome-terminal gnome-terminal-data gnome-themes-standard gnome-tweak-tool gnome-user-share gstreamer0.10-gconf gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hplip hplip-data ia32-libs ia32-libs-multiarch:i386 ibus ibus-pinyin ibus-table indicator-datetime indicator-power jockey-common jockey-gtk landscape-client-ui-install language-selector-common language-selector-gnome launchpad-integration libcanberra-gtk-module libcanberra-gtk-module:i386 libcanberra-gtk3-module libcompizconfig0 libfolks-eds25 libgksu2-0 libgnome-media-profiles-3.0-0 libgnome2-0 libgnome2-common libgnomevfs2-0 libgnomevfs2-common libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libmetacity-private0 libmutter0 libpeas-1.0-0 libpurple-bin libpython2.7 libreoffice-gnome librhythmbox-core5 libsyncdaemon-1.0-1 libtotem0 libubuntuoneui-3.0-1 light-themes lsb-release metacity metacity-common mutter-common nautilus-dropbox 
nautilus-share network-manager-gnome nvidia-common nvidia-settings nvidia-settings-updates onboard oneconf openjdk-7-jdk openjdk-7-jre openprinting-ppds pidgin pidgin-libnotify pidgin-otr printer-driver-foo2zjs printer-driver-ptouch printer-driver-pxljr printer-driver-sag-gdi printer-driver-splix python python-appindicator python-apport python-apt python-apt-common python-aptdaemon python-aptdaemon.gtk3widgets python-aptdaemon.pkcompat python-brlapi python-cairo python-central python-chardet python-configglue python-crypto python-cups python-cupshelpers python-dateutil python-dbus python-debian python-debtagshw python-defer python-dirspec python-egenix-mxdatetime python-egenix-mxtools python-gconf python-gdbm python-gi python-gi-cairo python-glade2:i386 python-gmenu python-gnomekeyring python-gnupginterface python-gobject python-gobject-2 python-gpgme python-gst0.10 python-gtk2 python-httplib2 python-ibus python-imaging python-keyring python-launchpadlib python-lazr.restfulclient python-lazr.uri python-libproxy python-libxml2 python-louis python-mako python-markupsafe python-minimal python-notify python-numeric:i386 python-oauth python-openssl python-packagekit python-pam python-pexpect python-piston-mini-client python-pkg-resources python-problem-report python-protobuf python-pyatspi2 python-pycurl python-pyinotify python-renderpm python-reportlab python-reportlab-accel python-serial python-simplejson python-smbc python-software-properties python-speechd python-twisted-bin python-twisted-core python-twisted-names python-twisted-web python-ubuntu-sso-client python-ubuntuone-client python-ubuntuone-control-panel python-ubuntuone-storageprotocol python-uno python-virtkey python-wadllib python-xapian python-xdg python-xkit python-zeitgeist python-zope.interface python2.7 python2.7-minimal rhythmbox rhythmbox-mozilla rhythmbox-plugin-cdrecorder rhythmbox-plugin-magnatune rhythmbox-plugin-zeitgeist rhythmbox-plugins rhythmbox-ubuntuone screen-resolution-extra sessioninstaller skype software-center software-center-aptdaemon-plugins software-properties-common software-properties-gtk system-config-printer-common system-config-printer-gnome system-config-printer-udev texlive-extra-utils totem totem-mozilla totem-plugins ubuntu-artwork ubuntu-desktop ubuntu-minimal ubuntu-sso-client ubuntu-sso-client-gtk ubuntu-standard ubuntu-system-service ubuntuone-client ubuntuone-client-gnome ubuntuone-control-panel ubuntuone-couch ubuntuone-installer ufw unattended-upgrades unity unity-2d unity-common unity-lens-applications unity-lens-video unity-scope-musicstores unity-scope-video-remote update-manager update-manager-core update-notifier update-notifier-common usb-creator-common usb-creator-gtk virtualbox virtualbox-dkms virtualbox-qt xdiagnose xul-ext-ubufox zeitgeist zeitgeist-core zeitgeist-datahub The following NEW packages will be installed: default-jre default-jre-headless icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx icedtea-netx-common libglade2-0:i386 libpython3.2 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib python3 python3-minimal python3-uno python3.2 python3.2-minimal WARNING: The following essential packages will be removed. This should NOT be done unless you know exactly what you are doing! python-minimal python2.7-minimal (due to python-minimal) 0 upgraded, 16 newly installed, 273 to remove and 0 not upgraded. 2 not fully installed or removed. Need to get 39.1 MB of archives. After this operation, 324 MB disk space will be freed. 
You are about to do something potentially harmful. To continue type in the phrase 'Yes, do as I say!' ?] That's not good, is it?! Should I run this command, or should I run another command to fix this problem? It would be great if somebody could help me. :) Thanks in advance. Best regards

    Read the article

  • Parallelism in .NET – Part 18, Task Continuations with Multiple Tasks

    - by Reed
In my introduction to Task continuations I demonstrated how the Task class provides a more expressive alternative to traditional callbacks. Task continuations provide a much cleaner syntax than traditional callbacks, but there are other reasons to switch to using continuations… Task continuations provide a clean syntax, and a very simple, elegant means of synchronizing asynchronous method results with the user interface. In addition, continuations provide a very simple, elegant means of working with collections of tasks. Prior to .NET 4, working with multiple related asynchronous method calls was very tricky. If, for example, we wanted to run two asynchronous operations, followed by a single method call which we wanted to run when the first two methods completed, we’d have to program all of the handling ourselves. We would likely need to take some approach such as using a shared callback which synchronized against a common variable, or using a WaitHandle shared within the callbacks to allow one to wait for the second. Although this could be accomplished easily enough, it requires manually placing this handling into every algorithm which requires this form of blocking. This is error prone, difficult, and can easily lead to subtle bugs. Similar to how the Task class provides static methods for blocking until multiple tasks have completed, TaskFactory contains static methods which allow a continuation to be scheduled upon the completion of multiple tasks: TaskFactory.ContinueWhenAll. This allows you to easily specify a single delegate to run when a collection of tasks has completed. For example, suppose we have a class which fetches data from the network. This can be a long running operation, and potentially fail in certain situations, such as a server being down. As a result, we have three separate servers which we will “query” for our information. Now, suppose we want to grab data from all three servers, and verify that the results are the same from all three. With traditional asynchronous programming in .NET, this would require using three separate callbacks, and managing the synchronization between the various operations ourselves. The Task and TaskFactory classes simplify this for us, allowing us to write: var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) ); var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) ); var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) ); var result = Task.Factory.ContinueWhenAll( new[] {server1, server2, server3 }, (tasks) => { // Propagate exceptions (see below) Task.WaitAll(tasks); return this.CompareTaskResults( tasks[0].Result, tasks[1].Result, tasks[2].Result); }); This is clean, simple, and elegant. The one complication is the Task.WaitAll(tasks); statement.
Although the continuation will not complete until all three tasks (server1, server2, and server3) have completed, there is a potential snag. If the networkClass.GetResults method fails, and raises an exception, we want to make sure to handle it cleanly. By using Task.WaitAll, any exceptions raised within any of our original tasks will get wrapped into a single AggregateException by the WaitAll method, providing us a simplified means of handling the exceptions. If we wait on the continuation, we can trap this AggregateException, and handle it cleanly. Without this line, it’s possible that an exception could remain uncaught and unhandled by a task, which later might trigger a nasty UnobservedTaskException. This would happen any time two of our original tasks failed. Just as we can schedule a continuation to occur when an entire collection of tasks has completed, we can just as easily set up a continuation to run when any single task within a collection completes. If, for example, we didn’t need to compare the results of all three network locations, but only use one, we could still schedule three tasks. We could then have our completion logic work on the first task which completed, and ignore the others. This is done via TaskFactory.ContinueWhenAny: var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) ); var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) ); var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) ); var result = Task.Factory.ContinueWhenAny( new[] {server1, server2, server3 }, (firstTask) => { return this.ProcessTaskResult(firstTask.Result); }); Here, instead of working with all three tasks, we’re just using the first task which finishes. This is very useful, as it allows us to easily work with results of multiple operations, and “throw away” the others. However, you must take care when using ContinueWhenAny to properly handle exceptions. At some point, you should always wait on each task (or use the Task.Result property) in order to propagate any exceptions raised from within the task. Failing to do so can lead to an UnobservedTaskException.
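To make the exception-propagation point concrete, here is a small self-contained sketch (mine, not from the original post) of waiting on the continuation's result and trapping the wrapped failures; GetResults below is a hypothetical stand-in for the networkClass.GetResults call used above:

using System;
using System.Threading.Tasks;

class ContinueWhenAllExceptionDemo
{
    // Hypothetical stand-in for networkClass.GetResults; one server fails.
    static string GetResults(string server)
    {
        if (server == "serverB")
            throw new InvalidOperationException("serverB is down");
        return "data";
    }

    static void Main()
    {
        var server1 = Task.Factory.StartNew(() => GetResults("serverA"));
        var server2 = Task.Factory.StartNew(() => GetResults("serverB"));
        var server3 = Task.Factory.StartNew(() => GetResults("serverC"));

        var result = Task.Factory.ContinueWhenAll(
            new[] { server1, server2, server3 },
            tasks =>
            {
                // Wraps any task exceptions into a single AggregateException.
                Task.WaitAll(tasks);
                return tasks[0].Result == tasks[1].Result
                    && tasks[1].Result == tasks[2].Result;
            });

        try
        {
            // Waiting on the continuation surfaces the wrapped exceptions here...
            Console.WriteLine("All servers agree: {0}", result.Result);
        }
        catch (AggregateException ae)
        {
            // ...instead of leaving them to become UnobservedTaskExceptions.
            foreach (var ex in ae.Flatten().InnerExceptions)
                Console.WriteLine("Server query failed: {0}", ex.Message);
        }
    }
}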

    Read the article

  • Reporting Services - It's a Wrap!

    - by smisner
If you have any experience at all with Reporting Services, you have probably developed a report using the matrix data region. It's handy when you want to generate columns dynamically based on data. If users view a matrix report online, they can scroll horizontally to view all columns and all is well. But if they want to print the report, the experience is completely different and you'll have to decide how you want to handle dynamic columns. By default, when a user prints a matrix report for which the number of columns exceeds the width of the page, Reporting Services determines how many columns can fit on the page and renders one or more separate pages for the additional columns. In this post, I'll explain two techniques for managing dynamic columns. First, I'll show how to use the RepeatRowHeaders property to make it easier to read a report when columns span multiple pages, and then I'll show you how to "wrap" columns so that you can avoid the horizontal page break. Included with this post are the sample RDLs for download. First, let's look at the default behavior of a matrix. A matrix that has too many columns for one printed page (or output to a page-based renderer like PDF or Word) will be rendered such that the first page contains the row group headers and the initial set of columns, as shown in Figure 1. The second page continues by rendering the next set of columns that can fit on the page, as shown in Figure 2. This pattern continues until all columns are rendered. The problem with the default behavior is that you've lost the context of employee and sales order - the row headers - on the second page. That makes it hard for users to read this report because the layout requires them to flip back and forth between the current page and the first page of the report. You can fix this behavior by finding the RepeatRowHeaders property of the tablix report item and changing its value to True. The second (and subsequent) pages of the matrix now look like the image shown in Figure 3. The problem with this approach is that the number of printed pages to flip through is unpredictable when you have a large number of potential columns. What if you want to include all columns on the same page? You can take advantage of the repeating behavior of a tablix and get repeating columns by embedding one tablix inside of another. For this example, I'm using SQL Server 2008 R2 Reporting Services. You can get similar results with SQL Server 2008. (In fact, you could probably do something similar in SQL Server 2005, but I haven't tested it. The steps would be slightly different because you would be working with the old-style matrix as compared to the new-style tablix discussed in this post.) I created a dataset that queries AdventureWorksDW2008 tables: SELECT TOP (100) e.LastName + ', ' + e.FirstName AS EmployeeName, d.FullDateAlternateKey, f.SalesOrderNumber, p.EnglishProductName, sum(SalesAmount) as SalesAmount FROM FactResellerSales AS f INNER JOIN DimProduct AS p ON p.ProductKey = f.ProductKey INNER JOIN DimDate AS d ON d.DateKey = f.OrderDateKey INNER JOIN DimEmployee AS e ON e.EmployeeKey = f.EmployeeKey GROUP BY p.EnglishProductName, d.FullDateAlternateKey, e.LastName + ', ' + e.FirstName, f.SalesOrderNumber ORDER BY EmployeeName, f.SalesOrderNumber, p.EnglishProductName To start the report: Add a matrix to the report body and drag Employee Name to the row header, which also creates a group.
Next, drag SalesOrderNumber below Employee Name in the Row Groups panel, which creates a second group and a second column in the row header section of the matrix, as shown in Figure 4. Now for some trickiness. Add another column to the row headers. This new column will be associated with the existing EmployeeName group rather than causing BIDS to create a new group. To do this, right-click on the EmployeeName textbox in the bottom row, point to Insert Column, and then click Inside Group-Right. Then add the SalesOrderNumber field to this new column. By doing this, you're creating a report that repeats a set of columns for each EmployeeName/SalesOrderNumber combination that appears in the data. Next, modify the first row group's expression to group on both EmployeeName and SalesOrderNumber. In the Row Groups section, right-click EmployeeName, click Group Properties, click the Add button, and select [SalesOrderNumber]. Now you need to configure the columns to repeat. Rather than use the Columns group of the matrix like you might expect, you're going to use the textbox that belongs to the second group of the tablix as a location for embedding other report items. First, clear out the text that's currently in the third column - SalesOrderNumber - because it's already added as a separate textbox in this report design. Then drag and drop a matrix into that textbox, as shown in Figure 5. Again, you need to do some tricks here to get the appearance and behavior right. We don't really want repeating rows in the embedded matrix, so follow these steps: Click on the Rows label which then displays RowGroup in the Row Groups pane below the report body. Right-click on RowGroup, click Delete Group, and select the option to delete associated rows and columns. As a result, you get a modified matrix which has only a ColumnGroup in it, with a row above a double-dashed line for the column group and a row below the line for the aggregated data. Let's continue: Drag EnglishProductName to the data textbox (below the line). Add a second data row by right-clicking EnglishProductName, pointing to Insert Row, and clicking Below. Add the SalesAmount field to the new data textbox. Now eliminate the column group row without eliminating the group. To do this, right-click the row above the double-dashed line, click Delete Rows, and then select Delete Rows Only in the message box. Now you're ready for the fit and finish phase: Resize the column containing the embedded matrix so that it fits completely. Also, the final column in the matrix is for the column group. You can't delete this column, but you can make it as small as possible. Just click on the matrix to display the row and column handles, and then drag the right edge of the rightmost column to the left to make the column virtually disappear. Next, configure the groups so that the columns of the embedded matrix will wrap. In the Column Groups pane, right-click ColumnGroup1 and click on the expression button (labeled fx) to the right of Group On [EnglishProductName]. Replace the expression with the following: =RowNumber("SalesOrderNumber"). We use SalesOrderNumber here because that is the name of the group that "contains" the embedded matrix. The next step is to configure the number of columns to display before wrapping. Click any cell in the matrix that is not inside the embedded matrix, and then double-click the second group in the Row Groups pane - SalesOrderNumber.
Change the group expression to the following expression: =Ceiling(RowNumber("EmployeeName")/3) The divisor sets the wrap point: Ceiling(1/3), Ceiling(2/3) and Ceiling(3/3) all evaluate to 1, so the first three column sets share one outer row, while Ceiling(4/3) evaluates to 2 and starts the next row. The last step is to apply formatting. In my example, I set the SalesAmount textbox's Format property to C2 and also right-aligned the text in both the EnglishProductName and the SalesAmount textboxes. And voila - Figure 6 shows a matrix report with wrapping columns.

    Read the article

  • Soapi.CS : A fully relational fluent .NET Stack Exchange API client library

    - by Sky Sanders
    Soapi.CS for .Net / Silverlight / Windows Phone 7 / Mono as easy as breathing...: var context = new ApiContext(apiKey).Initialize(false); Question thisPost = context.Official .StackApps .Questions.ById(386) .WithComments(true) .First(); Console.WriteLine(thisPost.Title); thisPost .Owner .Questions .PageSize(5) .Sort(PostSort.Votes) .ToList() .ForEach(q=> { Console.WriteLine("\t" + q.Score + "\t" + q.Title); q.Timeline.ToList().ForEach(t=> Console.WriteLine("\t\t" + t.TimelineType + "\t" + t.Owner.DisplayName)); Console.WriteLine(); }); // if you can think it, you can get it. Output Soapi.CS : A fully relational fluent .NET Stack Exchange API client library 21 Soapi.CS : A fully relational fluent .NET Stack Exchange API client library Revision code poet Revision code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Answer code poet Revision code poet Revision code poet 14 SOAPI-WATCH: A realtime service that notifies subscribers via twitter when the API changes in any way. Votes code poet Revision code poet Votes code poet Comment code poet Comment code poet Comment code poet Votes lfoust Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Revision code poet Comment lfoust Votes code poet Revision code poet Votes code poet Votes lfoust Votes code poet Revision code poet Comment Dave DeLong Revision code poet Revision code poet Votes code poet Comment lfoust Comment Dave DeLong Comment lfoust Comment lfoust Comment Dave DeLong Revision code poet 11 SOAPI-EXPLORE: Self-updating single page JavaSript API test harness Votes code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Comment code poet Revision code poet Votes code poet Comment code poet Question code poet Votes code poet 11 Soapi.JS V1.0: fluent JavaScript wrapper for the StackOverflow API Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Answer George Edison Votes code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Answer code poet Comment code poet Revision code poet Comment code poet Comment code poet Comment code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Comment code poet 9 SOAPI-DIFF: Your app broke? Check SOAPI-DIFF to find out what changed in the API Votes code poet Revision code poet Comment Dennis Williamson Answer Dennis Williamson Votes code poet Votes Dennis Williamson Comment code poet Question code poet Votes code poet About A robust, fully relational, easy to use, strongly typed, end-to-end StackOverflow API Client Library. Out of the box, Soapi provides you with a robust client library that abstracts away most all of the messy details of consuming the API and lets you concentrate on implementing your ideas. 
A few features include: A fully relational model of the API data set exposed via a fully 'dot navigable' IEnumerable (LINQ) implementation. Simply tell Soapi what you want and it will get it for you. e.g. "On my first question, from the author of the first comment, get the first page of comments by that person on any post" my.Questions.First().Comments.First().Owner.Comments.ToList(); (yes this is a real expression that returns the data as expressed!) Full coverage of the API, all routes and all parameters with an intuitive syntax. Strongly typed Domain Data Objects for all API data structures. Eager and Lazy Loading of 'stub' objects. Eager\Lazy loading may be disabled. When finer grained control of requests is desired, the core RouteMap objects may be leveraged to request data from any of the API paths using all available parameters as documented on the help pages. A rich Asynchronous implementation. A configurable request cache to reduce unnecessary network traffic and to simplify your usage logic. There is no need to go out of your way to be frugal. You may set a distinct cache duration for any particular route. A configurable request throttle to ensure compliance with the api terms of usage and to simplify your code in that you do not have to worry about and respond to 50X errors. The RequestCache and Throttled Queue are thread-safe, so you can make as many requests as you like from as many threads as you like as fast as you like and not worry about abusing the api or having to write reams of management/compensation code. Configurable retry threshold that will, by default, make up to 3 attempts to retrieve a request before failing. Every request made by Soapi is properly formed and directed so most any http error will be the result of a timeout or other network infrastructure. A retry buffer provides a level of fault tolerance that you can rely on. An almost identical javascript library, Soapi.JS, and its full-figured big brother, Soapi.JS2, that will enable you to leverage your server cycles and bandwidth for only those tasks that require it and offload things like status updates to the client's browser. License Licensed under the GPL Version 2 license. Why is Soapi.CS GPL? Can I get an LGPL license for Soapi.CS? (hint: probably) Platforms .NET 3.5 .NET 4.0 Silverlight 3 Silverlight 4 Windows Phone 7 Mono Download Source code lives @ http://soapics.codeplex.com. Binary releases are forthcoming. codeplex is acting up again. get the source and binaries @ http://bitbucket.org/bitpusher/soapi.cs/downloads The source is C# 3.5 and includes projects and solutions for the following IDEs Visual Studio 2008 Visual Studio 2010 MonoDevelop 2.4 Documentation Full documentation is available at http://soapi.info/help/cs/index.aspx Sample Code / Usage Examples Sample code and usage examples will be added as answers to this question. Full API Coverage all API routes are covered Full Parameter Parity If the API exposes it, Soapi giftwraps it for you. Building a simple app with Soapi.CS - a simple app that gathers all traces of a user in the whole stackiverse. Fluent Configuration - Setting up a Soapi.ApiContext could not be easier Bulk Data Import - A tiny app that quickly loads a SQLite data file with all users in the stackiverse. Paged Results - Soapi.CS transparently handles multi-page operations. Asynchronous Requests - Soapi.CS provides a rich asynchronous model that is especially useful when writing api apps in Silverlight or Windows Phone 7.
Caching and Throttling - how and why Apps that use Soapi.CS Soapi.FindUser - .net utility for locating a user anywhere in the stackiverse Soapi.Explore - The entire API at your command Soapi.LastSeen - List users by last access time Add your app/site here - I know you are out there ;-) if you are not comfortable editing this post, simply add a comment and I will add it. The CS/SL/WP7/MONO libraries all compile the same code and with the exception of environmental considerations of Silverlight, the code samples are valid for all libraries. You may also find guidance in the test suites. More information on the SOAPI eco-system. Contact This library is currently the effort of me, Sky Sanders (code poet) and can be reached at gmail - sky.sanders Any who are interested in improving this library are welcome. Support Soapi You can help support this project by voting for Soapi's Open Source Ad post For more information about the origins of Soapi.CS and the rest of the Soapi eco-system see What is Soapi and why should I care?

    Read the article

  • Enterprise Library Logging / Exception handling and Postsharp

    - by subodhnpushpak
One of my colleagues came up with a unique situation where it was required to create log files based on the input file which is uploaded. For example, if A.xml is uploaded, the corresponding log file should be A_log.txt. I am a strong believer that logging / EH / caching are cross-cutting architecture aspects and should be least invasive to the business logic written in an enterprise application. I have been using Enterprise Library for logging / EH (I used to work with Avanade, so I have affection towards the library!! :D ). I have also been using an excellent library called PostSharp for cross-cutting aspects. Here I present a solution with and without PostSharp, all in a unit test. Please see the full source code at the end of this blog post. But first, we need to tweak the Enterprise Library so that the log files are created at runtime based on the input given. Below is a custom trace listener which writes the log to a given file extracted out of the LogEntry ExtendedProperties property. using Microsoft.Practices.EnterpriseLibrary.Common.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners; using Microsoft.Practices.EnterpriseLibrary.Logging; using System.IO; using System.Text; using System; using System.Diagnostics; namespace Subodh.Framework.Logging { [ConfigurationElementType(typeof(CustomTraceListenerData))] public class LogToFileTraceListener : CustomTraceListener { private static object syncRoot = new object(); public override void TraceData(TraceEventCache eventCache, string source, TraceEventType eventType, int id, object data) { if ((data is LogEntry) && this.Formatter != null) { WriteOutToLog(this.Formatter.Format((LogEntry)data), (LogEntry)data); } else { WriteOutToLog(data.ToString(), (LogEntry)data); } } public override void Write(string message) { Debug.Print(message.ToString()); } public override void WriteLine(string message) { Debug.Print(message.ToString()); } private void WriteOutToLog(string BodyText, LogEntry logentry) { try { //Get the filelocation from the extended properties if (logentry.ExtendedProperties.ContainsKey("filelocation")) { string fullPath = Path.GetFullPath(logentry.ExtendedProperties["filelocation"].ToString()); //Create the directory where the log file is written to if it does not exist. DirectoryInfo directoryInfo = new DirectoryInfo(Path.GetDirectoryName(fullPath)); if (directoryInfo.Exists == false) { directoryInfo.Create(); } //Lock the file to prevent another process from using this file //as data is being written to it.
lock (syncRoot) { using (FileStream fs = new FileStream(fullPath, FileMode.Append, FileAccess.Write, FileShare.Write, 4096, true)) { using (StreamWriter sw = new StreamWriter(fs, Encoding.UTF8)) { Log(BodyText, sw); sw.Close(); } fs.Close(); } } } } catch (Exception ex) { throw new LoggingException(ex.Message, ex); } } /// <summary> /// Write message to named file /// </summary> public static void Log(string logMessage, TextWriter w) { w.WriteLine("{0}", logMessage); } } } The above can be "plugged into" the code using the configuration below <loggingConfiguration name="Logging Application Block" tracingEnabled="true" defaultCategory="Trace" logWarningsWhenNoCategoriesMatch="true"> <listeners> <add listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.CustomTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" traceOutputOptions="None" filter="All" type="Subodh.Framework.Logging.LogToFileTraceListener, Subodh.Framework.Logging, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Subodh Custom Trace Listener" initializeData="" formatter="Text Formatter" /> </listeners> Similarly, we can use PostSharp to expose the above as cross-cutting aspects, as shown below using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Reflection; using PostSharp.Laos; using System.Diagnostics; using GC.FrameworkServices.ExceptionHandler; using Subodh.Framework.Logging; namespace Subodh.Framework.ExceptionHandling { [Serializable] public sealed class LogExceptionAttribute : OnExceptionAspect { private string prefix; private MethodFormatStrings formatStrings; // This field is not serialized. It is used only at compile time. [NonSerialized] private readonly Type exceptionType; private string fileName; /// <summary> /// Declares a <see cref="LogExceptionAttribute"/> custom attribute /// that logs every exception flowing out of the methods to which /// the custom attribute is applied. /// </summary> public LogExceptionAttribute() { } /// <summary> /// Declares a <see cref="LogExceptionAttribute"/> custom attribute /// that logs every exception derived from a given <see cref="Type"/> /// flowing out of the methods to which /// the custom attribute is applied. /// </summary> /// <param name="exceptionType"></param> public LogExceptionAttribute( Type exceptionType ) { this.exceptionType = exceptionType; } public LogExceptionAttribute(Type exceptionType, string fileName) { this.exceptionType = exceptionType; this.fileName = fileName; } /// <summary> /// Gets or sets the prefix string, printed before every trace message. /// </summary> /// <value> /// For instance <c>[Exception]</c>. /// </value> public string Prefix { get { return this.prefix; } set { this.prefix = value; } } /// <summary> /// Initializes the current object. Called at compile time by PostSharp. /// </summary> /// <param name="method">Method to which the current instance is /// associated.</param> public override void CompileTimeInitialize( MethodBase method ) { // We just initialize our fields. They will be serialized at compile-time // and deserialized at runtime.
this.formatStrings = Formatter.GetMethodFormatStrings( method ); this.prefix = Formatter.NormalizePrefix( this.prefix ); } public override Type GetExceptionType( MethodBase method ) { return this.exceptionType; } /// <summary> /// Method executed when an exception occurs in the methods to which the current /// custom attribute has been applied. We just write a record to the tracing /// subsystem. /// </summary> /// <param name="context">Event arguments specifying which method /// is being called and with which parameters.</param> public override void OnException( MethodExecutionEventArgs context ) { string message = String.Format("{0}Exception {1} {{{2}}} in {{{3}}}. \r\n\r\nStack Trace {4}", this.prefix, context.Exception.GetType().Name, context.Exception.Message, this.formatStrings.Format(context.Instance, context.Method, context.GetReadOnlyArgumentArray()), context.Exception.StackTrace); if(!string.IsNullOrEmpty(fileName)) { ApplicationLogger.LogException(message, fileName); } else { ApplicationLogger.LogException(message, Source.UtilityService); } } } } To use the above, below is the unit test [TestMethod] [ExpectedException(typeof(NotImplementedException))] public void TestMethod1() { MethodThrowingExceptionForLog(); try { MethodThrowingExceptionForLogWithPostSharp(); } catch (NotImplementedException ex) { throw ex; } } private void MethodThrowingExceptionForLog() { try { throw new NotImplementedException(); } catch (NotImplementedException ex) { // create file and then write log ApplicationLogger.TraceMessage("this is a trace message which will be logged in Test1MyFile", @"D:\EL\Test1Myfile.txt"); ApplicationLogger.TraceMessage("this is a trace message which will be logged in YetAnotherTest1Myfile", @"D:\EL\YetAnotherTest1Myfile.txt"); } } // Automatically log details using attributes // Log exception using attributes .... A La WCF [FaultContract(typeof(FaultMessage))] style] [Log(@"D:\EL\Test1MyfileLogPostsharp.txt")] [LogException(typeof(NotImplementedException), @"D:\EL\Test1MyfileExceptionPostsharp.txt")] private void MethodThrowingExceptionForLogWithPostSharp() { throw new NotImplementedException(); } The good thing about the approach is that all the logging and EH is done at a centralized location controlled by PostSharp. Of course, if some other library has to be used instead of EL, it can easily be plugged in. Also, the coders are only involved in writing business code in methods, which makes the code cleaner, as the sketch below illustrates. Here is the full source code. The third party assemblies provided are from EL and PostSharp and I presume you will find these useful. Do let me know your thoughts / ideas on the same.
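As a closing illustration of the "coders only write business code" point, here is a hypothetical sketch (not from the post) of a business method once the aspects above carry the cross-cutting concerns; the class, method and file paths are illustrative only:

using System.IO;
using System.Xml.Linq;
using Subodh.Framework.ExceptionHandling;

namespace MyCompany.Business
{
    public class UploadService
    {
        // The LogException aspect defined above captures and logs failures;
        // the method body is pure business logic with no try/catch plumbing.
        [LogException(typeof(InvalidDataException), @"D:\EL\UploadErrors.txt")]
        public void ImportFile(string inputFile)
        {
            var document = XDocument.Load(inputFile);
            // ... process the document ...
        }
    }
}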

    Read the article

  • Interview with Al-Sorayai Group’s Managing Director on the Oracle Retail deployment

    - by user801960
    Recently, I had the opportunity to speak with Sheik Al Sorayai, Managing Director of the Saudi Arabian carpet and rug manufacturer, the Al-Sorayai Group. His business has recently implemented Oracle® Retail Merchandising and Stores applications in only six months to support the launch of its new furniture retail concept, HomeStyle. With an aggressive growth strategy for the new business in place, the Oracle Retail solutions are enabling Al-Sorayai to coordinate merchandising and store operations and improve decision-making and insight to optimise margins, reduce inventory costs and provide a consistent customer experience.

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1

    - by shiju
In this post, I will demonstrate web application development using ASP.NET MVC 3, Razor and EF Code First. This post will also cover Dependency Injection using Unity 2.0 and a generic Repository and Unit of Work for EF Code First. The following frameworks will be used for this step by step tutorial. ASP.NET MVC 3 EF Code First CTP 5 Unity 2.0 Define Domain Model Let's create the domain model for our simple web application Category class public class Category { public int CategoryId { get; set; } [Required(ErrorMessage = "Name Required")] [StringLength(25, ErrorMessage = "Must be less than 25 characters")] public string Name { get; set; } public string Description { get; set; } public virtual ICollection<Expense> Expenses { get; set; } } Expense class public class Expense { public int ExpenseId { get; set; } public string Transaction { get; set; } public DateTime Date { get; set; } public double Amount { get; set; } public int CategoryId { get; set; } public virtual Category Category { get; set; } } We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. In this post, we will be focusing on CRUD operations for the entity Category and will be working on the Expense entity with a View Model object in a later post. And the source code for this application will be refactored over time. The above entities are very simple POCO (Plain Old CLR Object) classes and the entity Category is decorated with validation attributes in the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities for defining model objects for the Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling to any API or database library. This approach lets you focus on the domain model, which enables Domain-Driven Development for applications. EF Code First support is currently enabled with a separate API that runs on top of the Entity Framework 4. EF Code First has reached CTP 5 as I write this article. Creating Context Class for Entity Framework We have created our domain model; now let's create a class in order to work with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add a reference to the assembly EntityFramework.dll. You can also use NuGet to download and add a reference to EF Code First. public class MyFinanceContext : DbContext { public MyFinanceContext() : base("MyFinance") { } public DbSet<Category> Categories { get; set; } public DbSet<Expense> Expenses { get; set; } } The above class MyFinanceContext is derived from DbContext, which can connect your model classes to a database. The MyFinanceContext class maps our Category and Expense classes to the database tables Categories and Expenses using DbSet<TEntity>, where TEntity is any POCO class. When we run the application for the first time, it will automatically create the database. EF Code First looks for a connection string in web.config or app.config that has the same name as the dbcontext class. If it does not find a connection string matching the convention, it will by default create the database in the local SQL Express instance, and the database name will be the same as the dbcontext class. You can also define the database name in the constructor of the dbcontext class.
Unlike NHibernate, we don’t have to use any XML-based mapping files or a fluent interface for mapping between our model and database. The model classes of Code First work on the basis of conventions, and we can also use a fluent API to refine our model. The convention for primary key is ‘Id’ or ‘<class name>Id’. If primary key properties are detected with type ‘int’, ‘long’ or ‘short’, they will automatically be registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes in the System.ComponentModel.DataAnnotations namespace, and the framework automatically enforces the validation rules when a model object is updated or saved. Generic Repository for EF Code First We have created the model classes and the dbcontext class. Now we have to create a generic repository pattern for data persistence with EF Code First. If you don’t know about the repository pattern, check out Martin Fowler’s article on Repository Let’s create a generic repository to work with DbContext and DbSet generics. public interface IRepository<T> where T : class { void Add(T entity); void Delete(T entity); T GetById(long Id); IEnumerable<T> All(); } RepositoryBase – Generic Repository class public abstract class RepositoryBase<T> where T : class { private MyFinanceContext database; private readonly IDbSet<T> dbset; protected RepositoryBase(IDatabaseFactory databaseFactory) { DatabaseFactory = databaseFactory; dbset = Database.Set<T>(); } protected IDatabaseFactory DatabaseFactory { get; private set; } protected MyFinanceContext Database { get { return database ?? (database = DatabaseFactory.Get()); } } public virtual void Add(T entity) { dbset.Add(entity); } public virtual void Delete(T entity) { dbset.Remove(entity); } public virtual T GetById(long id) { return dbset.Find(id); } public virtual IEnumerable<T> All() { return dbset.ToList(); } } DatabaseFactory class public class DatabaseFactory : Disposable, IDatabaseFactory { private MyFinanceContext database; public MyFinanceContext Get() { return database ?? (database = new MyFinanceContext()); } protected override void DisposeCore() { if (database != null) database.Dispose(); } } Unit of Work If you are new to the Unit of Work pattern, check out Fowler’s article on Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let’s create a class for handling Unit of Work public interface IUnitOfWork { void Commit(); } UnitOfWork class public class UnitOfWork : IUnitOfWork { private readonly IDatabaseFactory databaseFactory; private MyFinanceContext dataContext; public UnitOfWork(IDatabaseFactory databaseFactory) { this.databaseFactory = databaseFactory; } protected MyFinanceContext DataContext { get { return dataContext ?? (dataContext = databaseFactory.Get()); } } public void Commit() { DataContext.Commit(); } } The Commit method of the UnitOfWork calls the Commit method of the MyFinanceContext class, which in turn executes the SaveChanges method of the DbContext class.
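One gap worth noting: the MyFinanceContext shown earlier doesn't include the Commit method that UnitOfWork invokes. A minimal sketch consistent with the description above - an assumption on my part, not code from the original post - would simply delegate to DbContext.SaveChanges:

public class MyFinanceContext : DbContext
{
    public MyFinanceContext() : base("MyFinance") { }

    public DbSet<Category> Categories { get; set; }
    public DbSet<Expense> Expenses { get; set; }

    // Assumed implementation: UnitOfWork.Commit ends up here, which
    // delegates to DbContext.SaveChanges as described in the text.
    public void Commit()
    {
        base.SaveChanges();
    }
}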
Repository class for Category In this post, we will be focusing on persistence for the Category entity and will work on the other entities in a later post. Let's create a repository for handling CRUD operations for Category by deriving from the generic repository RepositoryBase<T>. public class CategoryRepository: RepositoryBase<Category>, ICategoryRepository { public CategoryRepository(IDatabaseFactory databaseFactory) : base(databaseFactory) { } } public interface ICategoryRepository : IRepository<Category> { } If the Category needs additional methods beyond the generic repository, we can define them in the CategoryRepository. Dependency Injection using Unity 2.0 If you are new to Inversion of Control / Dependency Injection or Unity, please have a look at my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity to store the container in the current HttpContext. public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable { public override object GetValue() { return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName]; } public override void RemoveValue() { HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName); } public override void SetValue(object newValue) { HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue; } public void Dispose() { RemoveValue(); } } Let's create a controller factory for Unity in the ASP.NET MVC 3 application. public class UnityControllerFactory : DefaultControllerFactory { IUnityContainer container; public UnityControllerFactory(IUnityContainer container) { this.container = container; } protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType) { IController controller; if (controllerType == null) throw new HttpException( 404, String.Format( "The controller for path '{0}' could not be found " + "or it does not implement IController.", reqContext.HttpContext.Request.Path)); if (!typeof(IController).IsAssignableFrom(controllerType)) throw new ArgumentException( string.Format( "Type requested is not a controller: {0}", controllerType.Name), "controllerType"); try { controller = container.Resolve(controllerType) as IController; } catch (Exception ex) { throw new InvalidOperationException(String.Format( "Error resolving controller {0}", controllerType.Name), ex); } return controller; } } Configure contract and concrete types in Unity Let's configure our contract and concrete types in Unity for resolving our dependencies.
private void ConfigureUnity() { //Create UnityContainer IUnityContainer container = new UnityContainer() .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>()) .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>()) .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>()); //Set container for Controller Factory ControllerBuilder.Current.SetControllerFactory( new UnityControllerFactory(container)); } In the above ConfigureUnity method, we are registering our types onto the Unity container with the custom lifetime manager HttpContextLifetimeManager. Let's call the ConfigureUnity method in Global.asax.cs to set the controller factory for Unity and configure the types with Unity. protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterGlobalFilters(GlobalFilters.Filters); RegisterRoutes(RouteTable.Routes); ConfigureUnity(); } Developing web application using ASP.NET MVC 3 We have created our domain model for our web application and also have created repositories and configured dependencies with the Unity container. Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let's create a controller class for Category Category Controller public class CategoryController : Controller { private readonly ICategoryRepository categoryRepository; private readonly IUnitOfWork unitOfWork; public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork) { this.categoryRepository = categoryRepository; this.unitOfWork = unitOfWork; } public ActionResult Index() { var categories = categoryRepository.All(); return View(categories); } [HttpGet] public ActionResult Edit(int id) { var category = categoryRepository.GetById(id); return View(category); } [HttpPost] public ActionResult Edit(int id, FormCollection collection) { var category = categoryRepository.GetById(id); if (TryUpdateModel(category)) { unitOfWork.Commit(); return RedirectToAction("Index"); } else return View(category); } [HttpGet] public ActionResult Create() { var category = new Category(); return View(category); } [HttpPost] public ActionResult Create(Category category) { if (!ModelState.IsValid) { return View("Create", category); } categoryRepository.Add(category); unitOfWork.Commit(); return RedirectToAction("Index"); } [HttpPost] public ActionResult Delete(int id) { var category = categoryRepository.GetById(id); categoryRepository.Delete(category); unitOfWork.Commit(); var categories = categoryRepository.All(); return PartialView("CategoryList", categories); } } Creating Views in Razor Now we are going to create views in Razor for our ASP.NET MVC 3 application. Let's create a partial view CategoryList.cshtml for listing category information and providing links for Edit and Delete operations.
CategoryList.cshtml @using MyFinance.Helpers; @using MyFinance.Domain; @model IEnumerable<Category> <table> <tr> <th>Actions</th> <th>Name</th> <th>Description</th> </tr> @foreach (var item in Model) { <tr> <td> @Html.ActionLink("Edit", "Edit", new { id = item.CategoryId }) @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" }) </td> <td> @item.Name </td> <td> @item.Description </td> </tr> } </table> <p> @Html.ActionLink("Create New", "Create") </p> The delete link provides Ajax functionality using Ajax.ActionLink. This will make an Ajax request to the Delete action method in the CategoryController class. The Delete action method returns the partial view CategoryList after deleting the record. We use the CategoryList view both for the Ajax functionality and within the Index view for displaying the list of categories. Let's create the Index view using the partial view CategoryList Index.cshtml @model IEnumerable<MyFinance.Domain.Category> @{ ViewBag.Title = "Index"; } <h2>Category List</h2> <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <div id="divCategoryList"> @Html.Partial("CategoryList", Model) </div> We can call the partial views using the Html.Partial helper method. Now we are going to create view pages for insert and update functionality for the Category. Both view pages share a common user interface for entering the category information. So I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name as the entity object so that we can refer to it on view pages using @Html.EditorFor(model => model). So let's create a template named Category. Let's create the view page for inserting Category information @model MyFinance.Domain.Category @{ ViewBag.Title = "Save"; } <h2>Create</h2> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> @using (Html.BeginForm()) { @Html.ValidationSummary(true) <fieldset> <legend>Category</legend> @Html.EditorFor(model => model) <p> <input type="submit" value="Create" /> </p> </fieldset> } <div> @Html.ActionLink("Back to List", "Index") </div> ViewStart file In Razor views, we can add a file named _viewstart.cshtml in the views directory, and this will be shared among all the views within the Views directory. The code below in _viewstart.cshtml sets the Layout page for every view in the Views folder. @{ Layout = "~/Views/Shared/_Layout.cshtml"; } Source Code You can download the source code from http://efmvc.codeplex.com/ . The source will be refactored over time. Summary In this post, we have created a simple web application using ASP.NET MVC 3 and EF Code First. We have discussed technologies and practices such as ASP.NET MVC 3, Razor, EF Code First, Unity 2, generic Repository and Unit of Work. In later posts, I will modify the application and discuss more topics.
Stay tuned to my blog for more posts on step-by-step application building.

    Read the article

  • An XEvent a Day (17 of 31) – A Look at Backup Internals and How to Track Backup and Restore Throughput (Part 1)

    - by Jonathan Kehayias
Today’s post is a continuation of yesterday’s post How Many Checkpoints are Issued During a Full Backup? and the investigation of Database Engine Internals with Extended Events.  In today’s post we’ll look at how Backups work inside of SQL Server and how to track the throughput of Backup and Restore operations.  This post is not going to cover Backups in SQL Server as a topic; if that is what you are looking for, see Paul Randal’s TechNet Article Understanding SQL Server Backups. Yesterday...(read more)

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for managing and monitoring the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product. In the first release of Oracle Utilities Customer Care And Billing (V1.x, that is), we used Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was the performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked as a performance consultant, I used this report to identify badly performing services and also gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some Mbeans supplied by the Web Application Server, it was not at the same granularity as txrpt, nor as useful. I am happy to say we have not only reintroduced this facility in the Oracle Utilities Application Framework, but it is now accessible via JMX and we have added more detail to the performance tracking. Much of this new design came from working with customers around the world to make sure we introduced a new feature that not only satisfied their performance tracking needs but also allowed for finer-grained performance analysis. As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in RMI Port number for JMX Business and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility: spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine: spl.runtime.management.rmi.port=6750 spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector jmx.remote.x.password.file=scripts/ouaf.jmx.password.file jmx.remote.x.access.file=scripts/ouaf.jmx.access.file ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see JMX Security for more details. Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration).
Once you start jconsole (ensure that splenviron[.sh] is executed beforehand to set the environment variables; for a remote connection, ensure java is in your path and jconsole.jar is in your classpath) you specify the URL in the spl.runtime.management.connector.url.default entry. For example: You are then able to track performance of the product using the PerformanceStatistics Mbean. The attributes of the PerformanceStatistics Mbean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following: The Service Type is captured so you can filter the results in terms of the type of service. For maintenance type services you can even see the transaction type (ADD, CHANGE etc) so you can see the performance of updates against read transactions. The Minimum and Maximum are also collected to give you an idea of the spread of performance. The last call is recorded. The date, time and user of the last call are recorded to give you an idea of the timeliness of the data. The Mbean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site. There are a number of interesting operations that can also be performed: reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is ok for most people. But what if you wanted to be more granular? This operation allows you to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset. completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can then be loaded into a database, a tool or simply into your favourite spreadsheet for analysis. Here is an extract of an execution dump from my demonstration environment to give you an idea of the format: ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User ... CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN ... There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full set of operations and attributes. This is one of the many features I am proud we implemented, as it allows flexible monitoring of the product's performance.
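To round this out, here is a small sketch (mine, not part of the product) of the "load it into a tool" suggestion: it reads a saved execution dump CSV and lists the slowest services, txrpt-style. The file name is an assumption, and the column positions follow the sample above:

using System;
using System.IO;
using System.Linq;

class ExecutionDumpAnalysis
{
    static void Main()
    {
        // Columns per the sample: ServiceName, ServiceType, MinTime, MaxTime,
        // Avg Time, # of Calls, Latest Time, Latest Date, Latest User
        var rows = File.ReadLines("executiondump.csv")
            .Skip(1) // skip the header row
            .Select(line => line.Split(','))
            .Select(f => new
            {
                ServiceName = f[0].Trim(),
                ServiceType = f[1].Trim(),
                AvgTime = double.Parse(f[4].Trim()),
                Calls = int.Parse(f[5].Trim())
            });

        // The five slowest services by average response time.
        foreach (var r in rows.OrderByDescending(x => x.AvgTime).Take(5))
        {
            Console.WriteLine("{0} ({1}): avg {2:F1} over {3} calls",
                r.ServiceName, r.ServiceType, r.AvgTime, r.Calls);
        }
    }
}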

    Read the article

  • JD Edwards World Reporting Made Easy with Real Time Reporting Tools from The GL Company

    Fred talks to Paul Yarwood, US Operations General Manager and Richard Crotty, North America Business Development Manager for The GL Company, an Oracle Certified Partner, and Denise Grills, Senior Director of Marketing and Product Strategy for Oracle's JD Edwards World products. They discuss how the finance department of JD Edwards World customers can have complete control over their management reporting with a true inquiry, consolidation, and reporting solution from The GL Company, freeing up the finance team from being dependent upon IT time and resources.

    Read the article

  • The SQL Server Reporting Services SDK for PHP Debuts

    - by The Official Microsoft IIS Site
    Microsoft has just released the SQL Server Reporting Services SDK for PHP, which enables PHP developers to easily create reports and integrate them in their web applications. The SDK offers a simple Application Programming Interface to interoperate with SQL Server Reporting Services, Microsoft's Reporting and Business Intelligence solution. Developers will be able to use the SDK to perform common operations like listing reports in PHP applications, providing custom report parameters from a PHP...(read more)

    Read the article

  • Oracle Unveils Oracle’s Primavera Inspire for SAP 8.0

    - by Sylvie MacKenzie, PMP
    “To successfully manage large capital projects and maintenance operations, organizations need clear visibility into materials, resources, schedule and financial information,” said Yasser Mahmud, vice president, Oracle’s Primavera Global Business Unit. “With the enhancements delivered in Oracle’s Primavera Inspire for SAP 8.0, partners and customers can benefit from an integrated solution that not only reduces risk, but also helps ensure that projects are completed on-time and within budget. This combination simplifies management and extends customers’ investments in existing SAP project management modules.” Read all about the new release at http://www.oracle.com/us/corporate/press/1666350?sc=OPR-TW

    Read the article

  • LLBLGen Pro v3.1 released!

    - by FransBouma
    Yesterday we released LLBLGen Pro v3.1! Version 3.1 comes with new features and enhancements, which I'll describe briefly below. v3.1 is a free upgrade for v3.x licensees.

    What's new / changed?

    Designer

    - Extensible import system. An extensible import system has been added to the designer to import project data from external sources. Importers are plug-ins which import project meta-data (like entity definitions, mappings and relational model data) from an external source into the loaded project. In v3.1, an importer plug-in for importing project elements from existing LLBLGen Pro v3.x project files has been included. You can use this importer to create source projects from which you import parts of models to build your actual project with.
    - Model-only relationships. In v3.1, relationships of the type 1:1, m:1 and 1:n can be marked as model-only. A model-only relationship isn't required to have a backing foreign key constraint in the relational model data. They're ideal for projects which have to work with relational databases where changes can't always be made or some relationships can't be added (e.g. the ones which are important for the entity model, but are not allowed to be added to the relational model for some reason).
    - Custom field ordering. Although fields in an entity definition don't really have an ordering, it can be important in some situations to have the entity fields in a given order, e.g. when you use compound primary keys. Field ordering can be defined using a pop-up dialog which can be opened in various ways, e.g. inside the project explorer, model view and entity editor. It can also be set automatically during refreshes based on new settings.
    - Command line relational model data refresher tool, CliRefresher.exe. The command line refresh tool shipped with v2.6 is now available for v3.1 as well.
    - Navigation enhancements in various designer elements. It's now easier to find elements like entities, typed views etc. in the project explorer from editors, to navigate to related entities in the project explorer by right-clicking a relationship, to navigate to the super-type in the project explorer by right-clicking an entity, and to navigate to the sub-type in the project explorer by right-clicking a sub-type node in the project explorer.
    - Minor visual enhancements / tweaks.

    LLBLGen Pro Runtime Framework

    - Entity creation is now up to 30% faster and takes 5% less memory. Creating an entity object has been optimized further by tweaks inside the framework, making instantiating an entity object up to 30% faster and up to 5% less memory-hungry than in v3.0.
    - Prefetch Path node merging is now up to 20-25% faster. Setting entity references required the creation of a new relationship object. As this relationship object is only used internally for syncing, it could be cached. This increases performance by 20-25% in the merging functionality.
    - Entity fetches are now up to 20% faster. A large number of tweaks have been applied to make entity fetches up to 20% faster than in v3.0.
    - Full WCF RIA support. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF RIA application using the VS.NET tools for WCF RIA Services. WCF RIA Services is a Microsoft technology for .NET 4, typically used within Silverlight applications.
    - SQL Server DQE compatibility level is now per instance (usable in Adapter). It's now possible to set the compatibility level of the SQL Server Dynamic Query Engine (DQE) per instance of the DQE instead of via the global setting it was before. The global setting is still available and is used as the default value for the per-instance compatibility level. You can use this to switch between CE Desktop and normal SQL Server compatibility per DataAccessAdapter instance.
    - Support for the COUNT_BIG aggregate function (SQL Server specific). The aggregate function COUNT_BIG has been added to the list of available aggregate functions in the framework.
    - Minor changes / tweaks.

    I'm especially pleased with the import system, as it makes working with entity models a lot easier. The import system lets you import from another LLBLGen Pro v3 project any entity definition, mapping and/or meta-data like table definitions. This way you can build repository projects where you store model fragments, e.g. the building blocks for a customer-order system, a user credential model etc. - any model you can think of. In most projects, you'll recognize that some parts of your new model look familiar. In these cases it would have been easier if you had been able to import those parts from projects you had pre-created. With LLBLGen Pro v3.1 you can. For example, say you have an Oracle schema called CRM which contains the bread 'n' butter customer-order-product kind of model. You create an entity model from that schema and save it in a project file. Now you start working on another project for another customer and you have to use SQL Server. You also start using model-first development, so you develop the entity model from scratch as there's no existing database. As this customer also requires a CRM-like entity model, you import the entities from your saved Oracle project into this new SQL Server targeting project. Because you don't work with Oracle this time, you don't import the relational meta-data, just the entities, their relationships and possibly their inheritance hierarchies, if any. As they're now entities in your project, you can change them a bit to match the new customer's requirements. This can save you a lot of time, because you can re-use pre-fab model fragments for new projects. In the example above there are no tables yet (as you work model-first), so the forward mapping capabilities of LLBLGen Pro v3 create the tables, PK constraints, unique constraints and FK constraints for you. This way you can build a nice repository of model fragments which you can re-use in new projects.

    Read the article

  • Oracle Database: Your Best Option for Reducing IT Costs

    - by Ivan Hassig
    By Victoria Cadavid, Sr. Sales Consultant, Oracle Direct

    One of the main challenges in data center administration is reducing operating costs. As companies grow and technology vendors offer ever more robust solutions, keeping the balance between performance, business support and Total Cost of Ownership management becomes an ever greater challenge for IT managers and data center administrators. The most common strategies for reducing the costs of running data centers, and of managing an organization's technology in general, focus on improving application performance, reducing hardware administration and acquisition costs, reducing storage costs, increasing productivity in database administration, and improving the handling of requests and help desk services. However, cost reduction strategies should also address the costs associated with data loss and theft, regulatory compliance, value generation and business continuity, which are commonly conceived as isolated initiatives and are not always pursued with cost reduction in mind. A comprehensive IT cost reduction initiative must consider every factor that generates cost and can be optimized. In this article we want to approach technology cost reduction through the adoption of what experts rank as the #1 database engine in the market1.

    For years, the Oracle database has been recognized for its speed, reliability, security and capacity to support workloads ranging from highly transactional applications to data warehouses and even Big Data2 analysis, offering high performance and ease of administration. However, when we think about IT cost reduction projects, beyond the capacity to support applications (even highly transactional applications) with high performance, we think about automation, resource optimization, consolidation, virtualization and even more convenient licensing alternatives. The Oracle Database is designed to provide all the capabilities an IT department needs to reduce costs, adapting to different business scenarios and to the capabilities and characteristics of each organization. Thus, in addition to the database engine, Oracle offers a series of solutions to optimize information management through mechanisms for optimizing storage use, business continuity, infrastructure consolidation, security and automatic administration, which promote better use of technology resources, offer advanced configuration options and reduce the time spent on the most common operational tasks.

    One of the database options that can be leveraged to reduce hardware costs is Oracle Real Application Clusters. This clustering solution allows several servers (even low-cost servers) to work together to support database grids or private clouds, providing the benefits of infrastructure consolidation, high availability schemes, fast performance and on-demand scalability, so that provisioning, database maintenance and the addition of new nodes are carried out faster and with less risk, while leveraging investments in lower-cost servers.

    Another solution that promotes technology cost reduction is Oracle In-Memory Database Cache, which stores and processes data in the applications' memory, making maximum use of the processing resources of the middle tier, something especially valuable in highly transactional scenarios. In this way the most is made of existing processing resources, avoiding unnecessary growth in hardware.

    Another way to avoid unnecessary hardware investments by taking advantage of existing resources, even in scenarios of high data volume growth, is data compression. Oracle Advanced Compression can compress the different types of data up to 4 times, improving storage capacity without compromising application performance.

    Important IT cost reductions can also be achieved on the storage side. Here, the Oracle database's own technology offers Automatic Storage Management capabilities that not only distribute data optimally across physical disks to guarantee maximum performance, but also ease the provisioning and removal of faulty disks and offer balancing and mirroring, guaranteeing maximum use of each device and the availability of the data. Another solution that eases storage administration is Oracle Partitioning, a database option that divides large tables into smaller structures. This approach simplifies information lifecycle management and allows you, for example, to separate historical data (which generally becomes read-only and is not queried heavily) and send it to low-cost storage, keeping the active data on faster storage devices. Additionally, Oracle Partitioning simplifies the administration of databases that hold a large volume of records and improves database performance, thanks to the ability to optimize queries by touching only the relevant partitions of a table or index during a search.

    Other factors that can generate unnecessary costs for IT departments are the loss, corruption or theft of data and the lack of application availability to support the business. To avoid these situations, which can lead to fines and to lost business and money, Oracle offers solutions to protect and audit the database, recover information in case of corruption or of actions that compromise the integrity of the information, and guarantee 24x7 availability of application data.

    We already discussed the benefits of Oracle RAC for consolidation and application performance; this solution is also extremely useful in scenarios where organizations want to guarantee high availability of information in the face of server failure or planned downtime for maintenance. Beyond Oracle RAC, solutions such as Oracle Data Guard and Active Data Guard automatically replicate databases to a contingency data center, enabling immediate recovery from events that completely disable a data center. In addition, Active Data Guard allows the standby database to be used for queries, improving application performance.

    On the security side, Oracle offers solutions such as Advanced Security, which encrypts the data and the channels through which information is shared; Total Recall, which lets you view the changes made to the database at a given point in time, to avoid data loss and corruption; Database Vault, which restricts privileged users' access to confidential information; Audit Vault, which verifies who did what and when within an organization's databases; and Oracle Data Masking, which masks data to protect sensitive information and meet the policies and regulations on confidential information, for example while applications move from development to production.

    As we mentioned at the beginning, technology cost reduction initiatives must rest on strategies that consider the different factors that can generate cost overruns, the risk factors that can bring unforeseen costs, the use of current resources to avoid unnecessary investments, and the optimization levers that get the most out of current investments. As we saw, all of these initiatives can be addressed using Oracle technology at the database level. The most important thing is to detect the critical risk points, assess the extent to which current resources are being used, and define the priorities of the organization and of the IT department, in order to launch the initiatives that will gradually avoid cost overruns and unnecessary investments, providing greater support to the business and a significant impact on the productivity of the organization.

    More information: http://www.oracle.com/lad/products/database/index.html?ssSourceSiteId=otnes

    1 Source: Market Share: All Software Markets, Worldwide 2011 by Colleen Graham, Joanne Correia, David Coyle, Fabrizio Biscotti, Matthew Cheung, Ruggero Contu, Yanna Dharmasthira, Tom Eid, Chad Eschinger, Bianca Granetto, Hai Hong Swinehart, Sharon Mertz, Chris Pang, Asheesh Raina, Dan Sommer, Bhavish Sood, Marianne D'Aquila, Laurie Wurster and Jie Zhang. March 29, 2012.
    2 Big Data: information gathered from non-traditional sources such as blogs, social networks, email, sensors, photographs, video recordings, etc., which is normally unstructured and high-volume.

    Read the article

  • New DMV… not yet

    - by Michael Zilberstein
    Downloaded and installed the new toy, and while reading BOL, stumbled upon a new and extremely useful DMV: sys.dm_exec_query_profiles. This DMV enables a DBA to monitor query progress while it is being executed. Counters in the DMV are per operation per thread, so we'll be able to monitor in real time which thread (even for parallel processing) processes which node in the plan, or find heavy operations "post mortem". We all know the uncomfortable feeling when some heavy query runs and the boss starts asking...(read more)
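
    As a rough sketch of how the DMV might be consumed from a monitoring tool, the JDBC snippet below polls sys.dm_exec_query_profiles for one session and prints a per-operator, per-thread progress snapshot. The connection string is hypothetical, the column list (session_id, node_id, physical_operator_name, thread_id, row_count, estimate_row_count) is taken from the BOL description and should be checked against your build, and note that in this CTP the watched query generally has to opt in to profiling (e.g. by running with SET STATISTICS PROFILE ON) before the DMV returns rows for it.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class QueryProfileSnapshot {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection string - adjust server, database and
                // credentials; requires the Microsoft JDBC driver on the classpath.
                String url = "jdbc:sqlserver://myserver;databaseName=master;"
                           + "integratedSecurity=true";
                String sql = "SELECT node_id, physical_operator_name, thread_id, "
                           + "row_count, estimate_row_count "
                           + "FROM sys.dm_exec_query_profiles WHERE session_id = ? "
                           + "ORDER BY node_id, thread_id";
                try (Connection con = DriverManager.getConnection(url);
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    // session_id of the query being watched, passed as an argument.
                    ps.setInt(1, Integer.parseInt(args[0]));
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // One row per operator per thread, as the post describes.
                            System.out.printf("node %d (%s) thread %d: %d of ~%d rows%n",
                                rs.getInt("node_id"),
                                rs.getString("physical_operator_name"),
                                rs.getInt("thread_id"),
                                rs.getLong("row_count"),
                                rs.getLong("estimate_row_count"));
                        }
                    }
                }
            }
        }

    Run in a loop, a snapshot like this shows where rows are flowing and which operators lag behind their estimates while the query is still executing.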

    Read the article
