Search Results

Search found 59118 results on 2365 pages for 'data persistence'.

Page 760/2365 | < Previous Page | 756 757 758 759 760 761 762 763 764 765 766 767  | Next Page >

  • A generic Re-usable C# Property Parser utility [on hold]

    - by Shyam K Pananghat
    This is about a utility I happened to write which can parse through the properties of a data contract at runtime using reflection. The required input is an XPath-like string. Since this uses reflection, you don't have to add a reference to any of your data contracts, making it purely generic and reusable. You can read about it and get the full C# source code here: Property-Parser-A-C-utility-to-retrieve-values-from-any-Net-Data-contracts-at-runtime. Now, about the doubts I have about this utility. I use it heavily in many places in my code, and I am using Regex repeatedly inside a recursive method. Does this affect memory usage or GC collection badly? Do I have to dispose of anything manually? If yes, how? Statements like obj.GetType().GetProperty() and obj.GetType().GetField() return .NET "object", which makes it difficult or impossible to introduce generics here. Does this cause any overhead such as boxing? Overall, please suggest how to make this utility performance-efficient and lighter on memory.
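
    On the Regex question: Regex is not IDisposable, so there is nothing to dispose manually, but constructing instances inside a recursive method does create avoidable garbage. A minimal sketch of the usual mitigation, hoisting the pattern into a static field and caching the reflection lookups (the pattern and names below are hypothetical stand-ins, since the linked source isn't reproduced in this excerpt):

        using System;
        using System.Collections.Concurrent;
        using System.Reflection;
        using System.Text.RegularExpressions;

        static class MemberCache
        {
            // Compiled once per process instead of once per recursive call.
            // The pattern is a placeholder for the real XPath-like segment syntax.
            public static readonly Regex Segment =
                new Regex(@"^(?<name>\w+)(\[(?<index>\d+)\])?$", RegexOptions.Compiled);

            // GetProperty() re-walks type metadata on every call; cache per (type, name).
            static readonly ConcurrentDictionary<Tuple<Type, string>, PropertyInfo> Props =
                new ConcurrentDictionary<Tuple<Type, string>, PropertyInfo>();

            public static PropertyInfo Property(Type type, string name)
            {
                return Props.GetOrAdd(Tuple.Create(type, name),
                                      k => k.Item1.GetProperty(k.Item2));
            }
        }

    As for boxing: the lookups themselves don't box, but PropertyInfo.GetValue returns object, so value-typed results are boxed at that point; avoiding that cost requires compiled delegates or expression trees rather than generics alone.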

    Read the article

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in this example) are available for user processes, i.e.:

        Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers

    MOS Doc ID 233753.1, "Analyzing Data Provided by '/proc/meminfo'", explains it in section 4, "Final Remarks":

        Free Memory and Used Memory. Estimating resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand.

    That said, focusing on the percentage question: apart from the memory the OS takes, how much free memory must be available on every node so that they operate normally? The answer: as a rule of thumb, 80% memory utilization is a good threshold, and anything bigger than that should be investigated and remedied.
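
    To make the arithmetic concrete, here is the rule of thumb applied to the numbers quoted above (an illustration only, not Oracle tooling; buffers and page cache are reclaimable, so they are excluded from "used"):

        using System;

        class MemUtilization
        {
            static void Main()
            {
                // Figures from the example output above, in kB.
                long total   = 99191652;
                long used    = 23785732;
                long buffers = 173320;

                // Buffers (and page cache, not shown on this summary line) can be
                // reclaimed on demand, so they don't count toward the 80% threshold.
                double pct = 100.0 * (used - buffers) / total;
                Console.WriteLine("Utilization: {0:F1}% of physical RAM", pct); // ~23.8%
            }
        }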

    Read the article

  • GTK app: How do I create a working indicator with Qt/C++?

    - by hakermania
    I've tried in two forums, but I've had no luck so far. I am using the Qt IDE to build my application, so as to participate in the Ubuntu Showdown contest. In my application, I've done the following:

        void show_app(MainWindow *data)
        {
            // this works fine:
            app_indicator_set_status(appindicator, APP_INDICATOR_STATUS_PASSIVE);
            // this crashes the application:
            data->show();
        }

        void MainWindow::make_indicator()
        {
            if (appindicator) {
                // appindicator has already been created
                return;
            }
            appindicator = app_indicator_new("Format Junkie Indicator",
                                             "formatjunkie",
                                             APP_INDICATOR_CATEGORY_APPLICATION_STATUS);
            GtkWidget* showapp_option;
            GtkWidget* indicatormenu = gtk_menu_new();
            GtkWidget* item = gtk_menu_item_new_with_label("Format Junkie main menu");
            gtk_menu_item_set_submenu(GTK_MENU_ITEM(item), indicatormenu);
            showapp_option = gtk_menu_item_new_with_label("Show App!");
            g_signal_connect(showapp_option, "activate", G_CALLBACK(show_app), this);
            gtk_menu_shell_append(GTK_MENU_SHELL(indicatormenu), showapp_option);
            gtk_widget_show_all(indicatormenu);
            app_indicator_set_status(appindicator, APP_INDICATOR_STATUS_ACTIVE);
            app_indicator_set_attention_icon(appindicator, "dialog-warning");
            app_indicator_set_menu(appindicator, GTK_MENU(indicatormenu));
        }

    So, basically, I am trying to make a simple indicator entry which, on click, hides the indicator and displays the application. The indicator can be successfully hidden using the PASSIVE status, but during the call data->show() the application crashes. Any help on what I am doing wrong would be appreciated! Failing that, I will migrate back to the good old tray icon (it works fine in 12.04 anyway), which I can handle very easily and efficiently.

    Read the article

  • Implementing a scalable and high-performing web app

    - by Christopher McCann
    I have asked a few questions on here before about various things relating to this, but this is more of a consolidation question, as I would like to check that I have got the gist of everything. I am in the middle of developing a social media web app, and although I have a lot of experience coding in Java and PHP, I am trying things a bit differently this time. I have modularised each component of the application. For example, one component of the application allows users to private-message each other, and I have split this off into its own private messaging service. I have also created a user data service, the purpose of which is to return data about the user (their name, address, age, etc.) from the database. There is also another service, the friends service, which will work off the Neo4j database to create a social graph. My reason for doing all this is to allow me to update separate modules when I need to; so, while they mostly all run off MySQL right now, I could move one to Cassandra later if I thought it appropriate. The actual code of the web app is really just used for the final construction. The modules behind it don't really follow any strict REST or SOAP protocol: basically, each method of our API is turned into a PHP procedural script. That script may then make calls to other back-end code, which tends to be OO. The web app makes cURL requests to these pages and POSTs data to them or GETs data from them. These pages then return JSON where data is required. I'm still a little mixed up about how I actually identify which user is logged in at a given moment. Do I just use sessions for that? Say we called the get-messages.php script, which equates to the getMessages() method for that user, returning all the private messages for that user: how would the back-end code know which user it is? POSTing the user's ID to the script would not be secure, since anyone could do that and get all the messages. So I thought I would use sessions for it. Am I correct on this? Can anyone spot any other problems with what I am doing here? Thanks

    Read the article

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats. My questions:

    1. What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    2. How should bitmaps be stored for high-performance algorithms, so that read/write times are fastest? (fixed array? with or without padding? 24-bpp or 32-bpp?)
    3. How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?)

    Some possible methods:

    - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer arithmetic, with all pixels allocated in one contiguous memory chunk (could be 1-10 MB); the indexing for this is sketched below.
    - Use a form of "sparse" data storage, so each line of the bitmap is allocated separately, letting memory be reused more readily and requiring smaller contiguous memory segments.
    - Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
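
    The stride arithmetic behind the first method is the same in any language; here is a minimal sketch (in C#, purely for illustration; the padding and pixel-format policy are assumptions left to the caller):

        // One contiguous 32-bpp buffer with row-stride indexing.
        class PackedBitmap32
        {
            public readonly int Width, Height, Stride;
            readonly int[] pixels;   // one ARGB pixel per int, single allocation

            public PackedBitmap32(int width, int height)
            {
                Width  = width;
                Height = height;
                Stride = width;      // widen here if row alignment/padding is needed
                pixels = new int[Stride * height];
            }

            public int GetPixel(int x, int y)
            {
                return pixels[y * Stride + x];
            }

            public void SetPixel(int x, int y, int argb)
            {
                pixels[y * Stride + x] = argb;
            }
        }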

    Read the article

  • Java University Pre-Conference Training Is Back

    - by Tori Wieldt
    The Java University pre-conference training will be back again this year at JavaOne! Java University gives conference attendees a chance to get even more from their conference experience, with the option to attend a full day of in-depth Java training delivered by the experts on the Sunday prior to the conference. We are working to make this event even better in 2012 and want to find out which technical sessions you would like to see offered. Web Services? Developing Rich Client Applications with Java SE 7? Developing Portable Java EE Applications with the Enterprise JavaBeans 3.1 API and Java Persistence API 2.0? Performance Tuning? There are so many hot topics to choose from that we need your help to decide. Get a preview of the full list of sessions we are considering, and tell us which ones pique your interest most in our short survey. There are only four questions, so it shouldn't take you much time, and it will help make for a better event in 2012.

    Read the article

  • .NET AES returns wrong Test Vectors

    - by ralu
    I need to implement a crypto protocol in C#, and I should say that this is my first project in C#. After spending some time getting used to C#, I found that I am unable to reproduce the compliant AES test vectors:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Security.Cryptography;
        using System.IO;

        namespace ConsoleApplication1
        {
            class Program
            {
                public static void Main()
                {
                    try
                    {
                        // test vectors from "ecb_vk.txt"
                        byte[] key = { 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                                       0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
                        byte[] data = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                                        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
                        byte[] encTest = { 0x0e, 0xdd, 0x33, 0xd3, 0xc6, 0x21, 0xe5, 0x46,
                                           0x45, 0x5b, 0xd8, 0xba, 0x14, 0x18, 0xbe, 0xc8 };

                        AesManaged aesAlg = new AesManaged();
                        aesAlg.BlockSize = 128;
                        aesAlg.Key = key;
                        aesAlg.Mode = CipherMode.ECB;

                        ICryptoTransform encryptor = aesAlg.CreateEncryptor();
                        MemoryStream msEncrypt = new MemoryStream();
                        CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write);
                        StreamWriter swEncrypt = new StreamWriter(csEncrypt);
                        swEncrypt.Write(data);
                        swEncrypt.Close();
                        csEncrypt.Close();
                        msEncrypt.Close();
                        aesAlg.Clear();

                        byte[] encr = msEncrypt.ToArray();

                        string datastr = BitConverter.ToString(data);
                        string encrstr = BitConverter.ToString(encr);
                        string encTestStr = BitConverter.ToString(encTest);
                        Console.WriteLine("data: {0}", datastr);
                        Console.WriteLine("encr: {0}", encrstr);
                        Console.WriteLine("should: {0}", encTestStr);
                        Console.ReadKey();
                    }
                    catch (Exception e)
                    {
                        Console.WriteLine("Error: {0}", e.Message);
                    }
                }
            }
        }

    The output is wrong:

        data: 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
        encr: A0-3C-C2-22-A4-32-F7-C9-BA-36-AE-73-66-BD-BB-A3
        should: 0E-DD-33-D3-C6-21-E5-46-45-5B-D8-BA-14-18-BE-C8

    I am sure that there is a correct AES implementation in .NET, so I need some advice from a .NET wizard to help with this.
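
    For readers hitting the same mismatch: the likely culprit is not the AES implementation but StreamWriter, which has no byte[] overload; swEncrypt.Write(data) binds to Write(object), so the text "System.Byte[]" is what actually gets encrypted. A sketch of the usual correction, replacing the StreamWriter portion of the code above (the vectors are single unpadded blocks, hence the padding change):

        aesAlg.Padding = PaddingMode.None;   // ECB known-answer vectors are unpadded blocks
        ICryptoTransform encryptor = aesAlg.CreateEncryptor();
        using (MemoryStream msEncrypt = new MemoryStream())
        {
            using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
            {
                // Write the raw bytes; no text writer in between.
                csEncrypt.Write(data, 0, data.Length);
                csEncrypt.FlushFinalBlock();
            }
            byte[] encr = msEncrypt.ToArray();   // should now match 0E-DD-33-D3-...
        }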

    Read the article

  • ORACLE PARTNER ARCHITECTS TRAINING

    - by Mike.Hallett(at)Oracle-BI&EPM
    Join the "Oracle Partner Architects Training". It is aimed at providing partner experts, architects and consultants with in-depth architectural knowledge about Oracle technology. Here is your chance to learn from the best: Oracle technology beyond the obvious. There are over 40 live and recorded online training sessions, covering many aspects of systems architecture, including these in the domain of BI, data warehousing and integration:

    - The Oracle Information Management Reference Architecture
    - 3NF Or Data Vault: Mixed Emotions Or A Clear Choice?
    - The Ins And Outs Of Data Integration
    - Introduction To BI-Applications
    - Customizations In BI-Applications
    - Endeca: Analyzing Unstructured Information

    Download the full schedule in your language (Dutch, English, French, German, Italian, Spanish).

    Read the article

  • PayPal IPN: how do I get the POSTs from this class?

    - by sineverba
    I'm using this class:

        <?php
        class paypalIPN
        {
            // sandbox:
            private $paypal_url = 'https://www.sandbox.paypal.com/cgi-bin/webscr';
            // live site:
            // private $paypal_url = 'https://www.paypal.com/cgi-bin/webscr';

            private $data = null;

            public function __construct()
            {
                $this->data = new stdClass;
            }

            public function isa_dispute()
            {
                // is it some sort of dispute
                return $this->data->txn_type == "new_case";
            }

            public function validate()
            {
                // parse the paypal URL
                $response = "";
                $url_parsed = parse_url($this->paypal_url);

                // generate the post string from the _POST vars as well as load the
                // _POST vars into an array so we can play with them from the calling
                // script
                $post_string = '';
                foreach ($_POST as $field => $value) {
                    $this->data->$field = $value;
                    $post_string .= $field . '=' . urlencode(stripslashes($value)) . '&';
                }
                $post_string .= "cmd=_notify-validate"; // append ipn command

                $ch = curl_init();
                curl_setopt($ch, CURLOPT_URL, $this->paypal_url);
                //curl_setopt($ch, CURLOPT_VERBOSE, 1);
                // keep the peer and server verification on, recommended
                // (can switch off if getting errors, turn to false)
                curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
                curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
                curl_setopt($ch, CURLOPT_POST, 1);
                curl_setopt($ch, CURLOPT_POSTFIELDS, $post_string);
                $response = curl_exec($ch);
                if (curl_errno($ch)) {
                    die("Curl Error: " . curl_errno($ch) . ": " . curl_error($ch));
                }
                curl_close($ch);

                return $response;

                if (preg_match("/VERIFIED/", $response)) {
                    // Valid IPN transaction.
                    return $this->data;
                } else {
                    return false;
                }
            }
        }

    And I call it in this way:

        public function get_ipn()
        {
            $ipn = new paypalIPN();
            $result = $ipn->validate();
            $logger = new Log('/error.log');
            $logger->write(print_r($result));
        }

    But I obtain only "VERIFIED" or "1" (without or with the print_r function). I also tried to return the raw cURL response directly with return $response; or return $this->response; or return $this->parse_string; but every time I receive only "1" or "VERIFIED". Thank you very much.

    Read the article

  • Synchronizing 3 servers over IP

    - by user93078
    I'm setting up a medical server for a hospital that has doctors in 3 different locations, meaning there would be 3 servers (1 in each location). All 3 servers would have just the following software:

    - Ubuntu Server 12.04 minimal
    - MySQL, PHP 5, Apache
    - The medical software, which reads/writes to the MySQL database
    - Remote admin apps like Nagios and Webmin
    - Rsync for backup (rsync-over-ssh) as a cron job

    The doctors at each location would access patient and billing data from their respective servers. What I'd like is for each of these servers to have synchronized information (especially the MySQL databases): say, on an hourly basis, each of the servers synchronizes data to a common remote server, and the data is then brought down to each of the servers. I know an easier way would be to have the medical app running on a remote web server, but since this is medical data we're talking about, and knowing how common it is in our area for the net to go down, I wouldn't like a web-based scenario. Is such a setup possible? Would this be the right way to do things, or is there a better way? I would really appreciate views and comments (or how to set this up).

    Read the article

  • dynamically bound elements don't subscribe to CSS rules

    - by user961627
    I'm using Ajax to dynamically populate a menu. The problem is that the dynamically created elements do not follow the CSS rules, although they're created with the right class name. This is surprising. Here's my HTML:

        <div id='menu1'>
          <ul class='menu'>
            <?php
            $sql = /* some sql string, which works */;
            $result = mysql_query($sql);
            while ($row = mysql_fetch_array($result)) {
                $prod_id   = $row['product_id'];
                $prod_name = $row['product_name'];
                echo "<li data-title='$prod_id' class='menu1_item'>" . $prod_name . "</li>";
            }
            ?>
          </ul>
        </div>
        <div id='menu2'>
          <ul class='menu'>
          </ul>
        </div>
        <div id='menu3'>
          <ul class='menu'>
          </ul>
        </div>

    Here's my jQuery:

        $(".menu1_item").click(function() {
            $.ajax({
                type: "POST",
                data: { m: '2', id: $(this).attr('data-title') },
                url: "fetch_designs.php",
                success: function(msg) {
                    if (msg != '') {
                        $("#menu2").html(msg).show();
                        $(".menu2_item").bind('click', function() {
                            $.ajax({
                                type: "POST",
                                data: { m: '3', id: $(this).attr('data-title') },
                                url: "fetch_designs.php",
                                success: function(msg) {
                                    if (msg != '') {
                                        $("#menu3").html(msg).show();
                                    }
                                }
                            });
                        }); // end menu2 click
                    } // end if
                } // end success
            });
        }); // end menu1 click

    My CSS is as follows:

        ul.menu li {
            cursor: pointer;
            list-style: none;
        }
        #menu1, #menu2, #menu3 {
            margin: 50px;
            position: relative;
            display: block;
            float: left;
        }
        .menu1-item .menu2-item .menu3-item {
            padding: 4px;
            background-color: lightgray;
            color: black;
            cursor: pointer;
        }

    Read the article

  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was just fine: I had methods with easy-to-understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes with different paths depending on the result value, etc. Suddenly, everything is very messy. Stopping the execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain. Development becomes less quick, less agile, and catching those bugs that happen once in 1000 times becomes hell. From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers can have delays, and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed, I don't know the real value of the improvement. After reading about the TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very concrete scenarios, but a bad experience using it broadly. How do you work asynchronously? Do you use it always? Or just for long-running processes? Thanks.
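
    One pattern that keeps such chains readable (a sketch only; the order-processing calls are hypothetical stand-ins for real repository methods) is to give each step its own small Task-returning method and let async/await express the branching, instead of nesting continuations:

        using System.Threading.Tasks;

        class Order { public bool RequiresApproval; }

        class OrderFlow
        {
            // Each step is a small Task-returning method; await composes them linearly,
            // so breakpoints and stack traces line up with the business timeline again.
            public async Task<Order> ProcessOrderAsync(int orderId)
            {
                Order order = await FetchOrderAsync(orderId);

                if (order.RequiresApproval)
                    await RequestApprovalAsync(order);    // path A
                else
                    await AutoConfirmAsync(order);        // path B

                await SaveOrderAsync(order);
                return order;
            }

            // Hypothetical service calls, stubbed so the sketch is self-contained.
            Task<Order> FetchOrderAsync(int id) { return Task.FromResult(new Order()); }
            Task RequestApprovalAsync(Order o)  { return Task.FromResult(0); }
            Task AutoConfirmAsync(Order o)      { return Task.FromResult(0); }
            Task SaveOrderAsync(Order o)        { return Task.FromResult(0); }
        }

    On .NET 4 without async/await, the same decomposition works with ContinueWith and Unwrap, at the cost of more ceremony; the key point is that each branch lives in a named method rather than an anonymous callback.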

    Read the article

  • public_html permissions for local development

    - by maGz
    I know this question has popped up a couple of times, but I can't seem to find a definitive answer to my issue, so please bear with me. I have Ubuntu Server 12.04 set up in VirtualBox for PHP development and testing (Drupal plus other PHP sites using the Yii framework). My question is in 3 parts...

    1) If I create a public_html folder under /home/myuser, do I need to give ownership of that folder to the Apache www-data group? If so, are there any specific permissions I should be setting? 755? (By the way, I am following this guide to create the public_html directory and set up multiple virtual hosts, one per site I create and test.) I previously had all of my sites under /var/www, but ran into massive permission-denied errors whenever I tried to sFTP to it, either through FileZilla or PhpStorm. This is what I had previously done:

        sudo chgrp www-data /var/www
        sudo chmod -R 775 /var/www
        sudo chmod -R g+s /var/www
        sudo usermod -G www-data [my_ftp_user]

    2) If I create my PHP project and files in Windows through PhpStorm, and then upload via sFTP, will permissions get affected?

    3) Once I am satisfied with my developed project, would it be advisable to move and test it under /var/www to see how it would fare in a production-ish environment?

    I would really appreciate the help and advice here. I'm learning more as I go along, but dealing with Linux files and permissions is a bit of a new ballgame for me! Thank you

    Read the article

  • Inserting checkbox values

    - by rabeea
    Hey, I have a registration form that has checkboxes along with other fields. I can't insert the selected checkbox values into the database. I have made one field in the database for storing all checked values. This is the code for the checkbox part of the form:

        Websites, IT and Software
        Writing and Content
        <pre><input type="checkbox" name="expertise[]" value="Design and Media"> Design and Media
             <input type="checkbox" name="expertise[]" value="Data entry and Admin"> Data entry and Admin
        </pre>
        <pre><input type="checkbox" name="expertise[]" value="Engineering and Skills"> Engineering and Science
             <input type="checkbox" name="expertise[]" value="Seles and Marketing"> Sales and Marketing
        </pre>
        <pre><input type="checkbox" name="expertise[]" value="Business and Accounting"> Business and Accounting
             <input type="checkbox" name="expertise[]" value="Others"> Others
        </pre>

    And this is the corresponding PHP code for inserting the data:

        $checkusername = mysql_query("SELECT * FROM freelancer WHERE fusername='{$_POST['username']}'");
        if (mysql_num_rows($checkusername) == 1) {
            echo "username already exist";
        } else {
            $query = "insert into freelancer(ffname,flname,fgender,femail,fusername,fpwd,fphone,fadd,facc,facc_name,fbank_details,fcity,fcountry,fexpertise,fprofile,fskills,fhourly_rate,fresume)
                      values ('".$_POST['first_name']."','".$_POST['last_name']."','".$_POST['gender']."','".$_POST['email']."','".$_POST['username']."','".$_POST['password']."','".$_POST['phone']."','".$_POST['address']."','".$_POST['acc_num']."','".$_POST['acc_name']."','".$_POST['bank']."','".$_POST['city']."','".$_POST['country']."','".implode(',', $_POST['expertise'])."','".$_POST['profile']."','".$_POST['skills']."','".$_POST['rate']."','".$_POST['resume']."')";
            $result = ($query) or die (mysql_error());
        }

    This code inserts data for all fields, but the checkbox value field remains empty?

    Read the article

  • Need advice concerning Feature Based Development when knowledge DB is involved

    - by voroninp
    We develop a BackOffice application which is used to edit our knowledge DB. Now our main product's development team is shifting to feature-based development, and we need to support several DBs with non-identical data schemes (the scheme changes slightly from DB to DB). The information from the knowledge DB is extracted by a script and then distributed to the clients. We also need to support merging these DBs. We are now analyzing the pros and cons of different approaches, and are discussing this one: one working DB (WDB) with one DB for each feature branch (FDB). The approved data is moved from the WDB to the FDBs, so we need to support only one script per branch, and that script will extract data from the corresponding FDB. Nevertheless, we would have to code the differences between the FDBs and the WDB manually. Maybe some automatic mapping tools exist? I would also like to know whether classic solutions to problems like this already exist. Can anyone share best practices for this case?

    Read the article

  • Oracle Day 2012

    - by Mark Hesse
    As a keynote speaker at this year's Oracle Day 2012, "Your Vision, Engineered", I had the honor and pleasure of speaking to a crowd of about 150 attendees about our recently released, fourth-generation Exadata X3 In-Memory Machine, in a presentation entitled "Oracle Exadata X3 - Transforming Data Management". The general theme of the thirty-minute talk was how to improve performance, lower costs, and build the foundation for your cloud service platform using Exadata. Since its introduction in 2008, I've watched first-hand as Exadata has evolved from a data-warehouse-only system to an OLTP and DW in-memory database machine capable of storing hundreds of terabytes of compressed user data in flash and main memory. Many of my Exadata customers are now purchasing additional systems as they continue to standardize Oracle 11g deployments on the best database platform available.

    Read the article

  • JavaOne Technology Conference Is Coming to Russia

    - by Tori Wieldt
    JavaOne Russia, 17-18 April, Russian Academy of Sciences, Moscow. Register Now!

    JavaOne and Oracle Develop 2012 Russia offers a wide variety of sessions, hands-on labs, keynotes, demos, and the opportunity to network with developer peers. If you're looking for in-depth sessions on Java technologies and tools, this is the conference for you. Your registration also gets you into Oracle Develop sessions, so you can learn about application servers, cloud development and, of course, database development. The JavaOne Russia tracks are:

    - Client-Side Technologies and Rich User Experiences: Learn about developments in Java for the desktop and practices for building rich, immersive, and powerful user experiences across multiple hardware platforms and form factors.
    - Core Java Platform: Discover the latest innovations in Java virtual machines. Get deep technical explanations of security and networking, and of the enhancements that allow dynamic programming languages to drive Java platform adoption.
    - Java EE Web Profile, Platform Technologies, Web Services, and the Cloud: Update your knowledge on topics such as Web application development, persistence, security, and transactions. This track will also address modularity, enterprise caching, Web sockets, and internet identity.
    - Mobile, Java Card, Embedded, and Devices: This track is devoted to Java technology as the ultimate platform for mobile computing. It also covers embedded and device usages of Java technologies, including Java SE, Java ME, Java Card, and JavaFX.

    Share this event: #oracleRU

    Read the article

  • Is a Multi-DAL Approach the way to go here?

    - by Krisc
    Working on the data access / model layer in this little MVC2 project and trying to think things out for future projects. I have a database with some basic tables, and I have classes in the model layer that represent them. I obviously need something to connect the two. The easiest is to provide some sort of 'provider' that can run operations on the database and return objects. But this is for a website that would potentially be used "a lot" (I know, very general), so I want to cache results from the data layer and keep the cache updated as new data is generated. This question deals with how best to approach this problem of dual DALs: one that returns cached data when possible and goes to the data layer on a cache miss. But more importantly, how to integrate the core provider (the thing that goes to the database) with the caching layer, so that it too can rely on cached objects rather than creating new ones. Right now I have the following interfaces. IDataProvider is used to reach the database; it doesn't concern itself with the meaning of the objects it produces, but simply with the way to produce them:

        interface IDataProvider
        {
            // Select, Update, Create, et cetera access
            IEnumerable<Entry> GetEntries();
            Entry GetEntryById(int id);
        }

    IDataManager is a layer that sits on top of the IDataProvider layer and manages the cache:

        interface IDataManager : IDataProvider
        {
            void ClearCache();
        }

    Note that in practice the IDataManager implementation will have useful helper functions to add objects to their related cache stores. (In the future I may define other functions on the interface.) I guess what I am looking for is the best way to approach a loop back from the IDataProvider implementations so that they can access the cache. Or would a different approach entirely be in order? I am not very interested in 3rd-party products at the moment, as I am interested in the design of these things much more than this specific implementation. Edit: I realize the title may be a bit misleading. I apologize for that... not sure what to call this question.
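
    One common shape for this, sketched against the question's own interfaces (Entry is assumed to expose an Id; thread safety and eviction are elided), is a caching decorator: the cache layer implements IDataProvider itself and wraps the database-backed provider, so anything that needs data, including other providers, can be handed the cached view without knowing the difference:

        using System.Collections.Generic;

        class CachingDataManager : IDataManager
        {
            private readonly IDataProvider inner;                       // database-backed provider
            private readonly Dictionary<int, Entry> byId = new Dictionary<int, Entry>();
            private bool allLoaded;                                     // whole-table cache state

            public CachingDataManager(IDataProvider inner)
            {
                this.inner = inner;
            }

            public IEnumerable<Entry> GetEntries()
            {
                if (!allLoaded)                                         // miss: one database hit
                {
                    foreach (Entry e in inner.GetEntries())
                        byId[e.Id] = e;                                 // assumes Entry exposes Id
                    allLoaded = true;
                }
                return byId.Values;
            }

            public Entry GetEntryById(int id)
            {
                Entry entry;
                if (byId.TryGetValue(id, out entry))
                    return entry;                                       // served from cache
                entry = inner.GetEntryById(id);                         // miss: delegate, remember
                if (entry != null)
                    byId[entry.Id] = entry;
                return entry;
            }

            public void ClearCache()
            {
                byId.Clear();
                allLoaded = false;
            }
        }

    Because the wrapper is itself an IDataProvider, the "loop back" falls out naturally: construct the database provider once, wrap it, and inject the wrapper everywhere an IDataProvider is expected.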

    Read the article

  • What are some good tips for a developer trying to design a scalable MySQL database?

    - by CFL_Jeff
    As the question states, I am a developer, not a DBA. I have experience designing good ER schemas and am fairly knowledgeable about normalization and good schema design. I have also worked with data warehouses that use dimensional modeling with fact tables and dim tables. However, all of the database-driven applications I've developed at previous jobs have been internal applications on the company's intranet, never receiving "real-world traffic". Furthermore, at previous jobs I have always had a DBA, or someone who knew much more than me about these things. At this new job I just started, I've been asked to develop a public-facing application with a MySQL backend, and the data stored by this application is expected to grow very rapidly. Oh, and we don't have a DBA. Well, I guess I am the DBA. ;) As far as designing a database to be scalable, I don't even know where to start. Does anyone have any good tips, or know of any good educational materials for a developer who has been sort of shoved into a DBA/database-designer role and has been tasked with designing a scalable database to support an application like this? Have any other developers been through this sort of thing? What did you do to quickly become good at this role? I've found some good slides on the subject here, but it's hard to glean details from slides; I wish I could've attended that talk. I also found a good blog entry called 5 Ways to Boost MySQL Scalability, which had some good information, though some of it was over my head. tl;dr: I just want to make sure the database doesn't have to be completely redesigned when it scales up, and I'm looking for tips to get it right the first time. The answer I'm looking for is a "list of things every developer should know about making a scalable MySQL database so your application doesn't perform like crap when the data gets huge".

    Read the article

  • Using pkexec policy to run out of /opt/

    - by liberavia
    I am still trying to make it possible to run my app with root privileges. I therefore created two policies to run the application via pkexec (one for /usr/bin and one for /opt/extras...) and added them to setup.py:

        data_files=[('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.pkexec.armorforge.policy']),
                    ('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.extras.pkexec.armorforge.policy']),
                    ('/usr/bin/', ['data/armorforge-pkexec'])]

    Additionally, I added a start script which uses pkexec to launch the application. It distinguishes between the two install locations and is used in the Exec statement of the desktop file:

        #!/bin/sh
        if [ -f /opt/extras.ubuntu.com/armorforge/bin/armorforge ]; then
            pkexec "/opt/extras.ubuntu.com/armorforge/bin/armorforge" "$@"
        else
            pkexec `which armorforge` "$@"
        fi

    If I simply do a quickly package, everything works right. But if I package with the extras option, quickly package --extras, the Exec statement gets exchanged. Even if I try to simulate the pkexec call via armorforge-pkexec, it asks for a password and then returns this:

        andre@andre-desktop:~/Entwicklung/Ubuntu/armorforge$ armorforge-pkexec

        (armorforge:10108): GLib-GIO-ERROR **: Settings schema 'org.gnome.desktop.interface' is not installed
        Trace/breakpoint trap (core dumped)

    So OK, I could not trick the opt thing. How can I make sure that my application runs with root privileges out of /opt? I copied the way of using pkexec from Synaptic. My application is for communicating with AppArmor, which currently has no D-Bus interface, so I need to write into the /etc/apparmor.d folder. How should I deal with the opt build which, as far as I understand, is required to submit my application to the Ubuntu Software Center? Thanks for any hints and/or links :-)

    Read the article

  • what's a good approach to working with multiple databases?

    - by Riz
    I'm working on a project that has its own database, call it InternalDb, but which also queries two other databases, call them ExternalDb1 and ExternalDb2. Both ExternalDb1 and ExternalDb2 are actually required by a few other projects. I'm wondering what the best approach for dealing with this is. Currently, I've just created a project for each of these external databases and then generated an EDMX and entities using the Entity Framework approach. My thought was that I could then include these projects in any of my solutions that require access to these databases. Also, I don't have any separate business layers. I just have a solution like below:

        Project.Domain
        ExternalDb1Project.Domain
        ExternalDb2Project.Domain
        Project.Web

    So my Domain projects contain the data access as well as the POCOs generated by Entity Framework, plus any business logic. But I'm not sure this is a good approach. For example, if I want to do validation in Project.Domain on the entities in the InternalDb, that's fine. But if I want to do validation for entities from either of the ExternalDbs, then I wonder where it should go. To be more specific: I retrieve Employees from ExternalDb1Project.Domain, but I want to make sure they are Active. Where should this validation go? How should a project like this be architected at a high level? Also, I want to make sure that I use IoC for my data contexts so I can create fakes when writing tests, and I wonder where the interfaces for these various data contexts should reside.
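
    One arrangement that keeps the external projects reusable (a sketch; the interface and member names here are hypothetical) is to have each external Domain project expose an abstraction over its context, and let each consuming project apply its own rules, such as the Active check, behind that seam. The same interface is what tests fake:

        using System.Collections.Generic;
        using System.Linq;

        // Stand-in for the EF-generated POCO in ExternalDb1Project.Domain.
        public class Employee
        {
            public bool IsActive { get; set; }   // hypothetical flag; use the real column
        }

        // Lives in ExternalDb1Project.Domain: the seam over the EF context,
        // and the thing a test replaces with a fake.
        public interface IEmployeeSource
        {
            IQueryable<Employee> Employees { get; }
        }

        // Lives in the consuming project: its own rule for what "valid" means.
        public class EmployeeReader
        {
            private readonly IEmployeeSource source;

            public EmployeeReader(IEmployeeSource source)   // injected via your IoC container
            {
                this.source = source;
            }

            public List<Employee> GetActiveEmployees()
            {
                // Validation policy stays with the consumer, not the shared data project.
                return source.Employees.Where(e => e.IsActive).ToList();
            }
        }

    Under this split, the interfaces live alongside the data projects they abstract, while validation rules live with whichever project owns them, so different consumers of ExternalDb1 can disagree about what "valid" means.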

    Read the article

  • Emulate Historical Figures, e.g. Einstein - Is this possible using linguistic logic for my http://www.ustimeline.com Education System

    - by Johnnylight
    After hearing about the success of IBM's Watson, I started thinking: perhaps emulating human language is now possible? My goal is to create virtual historical characters to represent the main figures in my Adventur-Cation The Great American Adventure program, such as Einstein or Crazy Horse. The goal is to build an intelligent system capable of indexing the internet and storing the data using a schema informed by modern linguistic theory (phonemes, morphemes, syntax), so as to build a system capable of returning a semantically sound response very similar to the response the same person would give if still alive today. The goal would be to use the same engine/system for all characters. Each character would have its own digital representation and voice, and would organize data differently based on tags/keywords stored about the individual. Imagine a Max Headroom Einstein. Based on the success of Watson, I believe something like this may now be possible. It would be an interesting way to study history, and a vehicle for entertainment as well. Can anyone confirm whether this has already been attempted? Is anyone interested in exploring this using cognitive science, psychology, artificial intelligence, historical data captured on the internet, and linguistic theory?

    Read the article

  • .NET datetime issue with SQL stored procedure

    - by DanO
    I am getting the error below when executing my application on a Windows XP machine with .NET 2.0 installed. On my own computer (Windows 7, .NET 2.0-3.5) I am not having any issues. The target SQL Server version is 2005. This error started occurring when I added the datetime to the stored procedure. I have been reading a lot about using .NET datetime with SQL datetime and I still have not figured this out. If someone can point me in the right direction, I would appreciate it. Here is where I believe the error is coming from:

        private static void InsertRecon(string computerName, int EncryptState,
                                        TimeSpan FindTime, Int64 EncryptSize, DateTime timeWritten)
        {
            SqlConnection DBC = new SqlConnection("server=server;UID=InventoryServer;" +
                                                  "Password=pass;database=Inventory;connection timeout=30");
            SqlCommand CMD = new SqlCommand();
            try
            {
                CMD.Connection = DBC;
                CMD.CommandType = CommandType.StoredProcedure;
                CMD.CommandText = "InsertReconData";

                CMD.Parameters.Add("@CNAME", SqlDbType.NVarChar);
                CMD.Parameters.Add("@ENCRYPTEXIST", SqlDbType.Int);
                CMD.Parameters.Add("@RUNTIME", SqlDbType.Time);
                CMD.Parameters.Add("@ENCRYPTSIZE", SqlDbType.BigInt);
                CMD.Parameters.Add("@TIMEWRITTEN", SqlDbType.DateTime);

                CMD.Parameters["@CNAME"].Value = computerName;
                CMD.Parameters["@ENCRYPTEXIST"].Value = EncryptState;
                CMD.Parameters["@RUNTIME"].Value = FindTime;
                CMD.Parameters["@ENCRYPTSIZE"].Value = EncryptSize;
                CMD.Parameters["@TIMEWRITTEN"].Value = timeWritten;

                DBC.Open();
                CMD.ExecuteNonQuery();
            }
            catch (System.Data.SqlClient.SqlException e)
            {
                PostMessage(e.Message);
            }
            finally
            {
                DBC.Close();
                CMD.Dispose();
                DBC.Dispose();
            }
        }

    And the exception:

        Unhandled Exception: System.ArgumentOutOfRangeException: The SqlDbType enumeration value, 32, is invalid.
        Parameter name: SqlDbType
           at System.Data.SqlClient.MetaType.GetMetaTypeFromSqlDbType(SqlDbType target)
           at System.Data.SqlClient.SqlParameter.set_SqlDbType(SqlDbType value)
           at System.Data.SqlClient.SqlParameter..ctor(String parameterName, SqlDbType dbType)
           at System.Data.SqlClient.SqlParameterCollection.Add(String parameterName, SqlDbType sqlDbType)
           at ReconHelper.getFilesInfo.InsertRecon(String computerName, Int32 EncryptState, TimeSpan FindTime, Int64 EncryptSize, DateTime timeWritten)
           at ReconHelper.getFilesInfo.Main(String[] args)
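
    A plausible reading of that stack trace (an observation, not from the original thread): SqlDbType.Time, whose enum value is 32, corresponds to the time data type introduced with SQL Server 2008 and supported in ADO.NET from .NET 3.5 SP1 onward, so the older System.Data on the XP machine rejects the value, and the target SQL Server 2005 has no time column type in any case. One workaround sketch, assuming the stored procedure can be changed to accept the duration as a bigint of ticks:

        // Hypothetical workaround: ship the TimeSpan as ticks in a bigint parameter.
        CMD.Parameters.Add("@RUNTIME", SqlDbType.BigInt);
        CMD.Parameters["@RUNTIME"].Value = FindTime.Ticks;

        // When reading it back, new TimeSpan(ticks) reconstructs the original value.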

    Read the article

  • Dynamic Dispatch without Virtual Functions

    - by Kristopher Johnson
    I've got some legacy code that, instead of virtual functions, uses a kind field to do dynamic dispatch. It looks something like this:

        // Base struct shared by all subtypes.
        // Plain old data; can't use virtual functions.
        struct POD
        {
            int kind;
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
        };

        enum Kind { Kind_Derived1, Kind_Derived2, Kind_Derived3 };

        struct Derived1 : POD
        {
            Derived1() { kind = Kind_Derived1; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived2 : POD
        {
            Derived2() { kind = Kind_Derived2; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived3 : POD
        {
            Derived3() { kind = Kind_Derived3; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

    The POD class's function members are then implemented like this:

        int POD::GetFoo()
        {
            // Call kind-specific function
            switch (kind)
            {
            case Kind_Derived1:
            {
                Derived1 *pDerived1 = static_cast<Derived1*>(this);
                return pDerived1->GetFoo();
            }
            case Kind_Derived2:
            {
                Derived2 *pDerived2 = static_cast<Derived2*>(this);
                return pDerived2->GetFoo();
            }
            case Kind_Derived3:
            {
                Derived3 *pDerived3 = static_cast<Derived3*>(this);
                return pDerived3->GetFoo();
            }
            default:
                throw UnknownKindException(kind, "GetFoo");
            }
        }

    POD::GetBar(), POD::GetBaz(), POD::GetXyzzy(), and the other members are implemented similarly. This example is simplified: the actual code has about a dozen different subtypes of POD and a couple dozen methods. New subtypes of POD and new methods are added pretty frequently, and every time we do that, we have to update all these switch statements. The typical way to handle this would be to declare the function members virtual in the POD class, but we can't do that because the objects reside in shared memory. There is a lot of code that depends on these structs being plain old data, so even if I could figure out some way to have virtual functions in shared-memory objects, I wouldn't want to do that. So I'm looking for suggestions on the best way to clean this up, so that all the knowledge of how to call the subtype methods is centralized in one place rather than scattered among a couple dozen switch statements in a couple dozen functions. What occurs to me is that I could create some sort of adapter class that wraps a POD and uses templates to minimize the redundancy. But before I start down that path, I'd like to know how others have dealt with this.

    Read the article

  • Proper library for enums

    - by Bobson
    I'm trying to refactor some code such that the display is separate from the implementation, and I'm not sure where to put the existing enums. My project is currently structured as follows: Utilities RemoteData (Depends on: Utilities) LocalData (Depends on: RemoteData, Utilities) RemoteWeb (Depends on: RemoteData, Utilities) LocalWeb (Depends on: RemoteData, LocalData, Utilities) I'm now trying to add "ViewLibrary (Depends on: Utilities)" to this list, and then adding it as a new dependency to both RemoteWeb and LocalWeb. It will contain a set of interfaces which the other two projects will implement, use to populate the view, and then consume the result. There's an enum which is currently used in all the projects except Utilities. It thus lives in the RemoteData project, because everything else depends on it. But this new ViewLibrary won't depend on either data project. So how will it know about this enum? Some options I see: Create a new project just for shared enum values. Add it to Utilities, even though it is related to data. Define it a second time in ViewLibrary, and require both RemoteWeb and LocalWeb to convert the one type into the other when they access the shared views. Add a dependency on RemoteData to the ViewLibrary, even though it's supposed to be independent of data-source. Are there any better options? Is this structure flawed to begin with?

    Read the article
