Search Results

Search found 33021 results on 1321 pages for 'database sessions'.

  • Minimizing data sent over a webservice call on expensive connection

    - by aceinthehole
    I am working on a system that has many remote laptops all connected to the internet through cellular data connections. The application will synchronize periodically to a central database. The problem is that, due to factors outside our control, the cost of moving data across the cellular networks is spectacularly expensive. Currently we are sending a compressed XML file across the wire, where it is processed and various things are done with it (mainly stuffing it into a database). My first couple of thoughts were to convert that XML doc to JSON just prior to transmission and convert back to XML just after receipt on the other end, getting some extra compression for free without changing much. Another thought was to test various other compression algorithms to determine the smallest one possible. Although, I am not entirely sure how much difference JSON vs XML would make once it is compressed. I thought that there must be resources available that address this problem from an information theory perspective. Does anyone know of any such resources, or have suggestions on what direction to go in? This is developed on the MS .NET stack on Windows, for reference.
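
    One cheap way to settle the JSON-vs-XML question empirically is to gzip a representative payload in both formats and compare the byte counts. A minimal C# sketch, assuming the same records serialized both ways into two placeholder files (DeflateStream or a third-party algorithm could be swapped in the same way):

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        class CompressionSizeTest
        {
            // Gzip-compress a string and report the compressed size in bytes.
            static long GzipSize(string payload)
            {
                var output = new MemoryStream();
                using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
                {
                    byte[] bytes = Encoding.UTF8.GetBytes(payload);
                    gzip.Write(bytes, 0, bytes.Length);
                }
                return output.Length;
            }

            static void Main()
            {
                // Placeholder file names: the same records serialized both ways.
                string xml = File.ReadAllText("sample.xml");
                string json = File.ReadAllText("sample.json");
                Console.WriteLine("XML  compressed: {0:N0} bytes", GzipSize(xml));
                Console.WriteLine("JSON compressed: {0:N0} bytes", GzipSize(json));
            }
        }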

    Read the article

  • Data that has been deleted in P6, how is it updated in Analytics

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0 the ETL process looked at the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs, or those deletes would be lost. After the ETL process is run, the refrdel table can be cleared out. It is important to keep any purging of refrdel on a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle all interaction with the refrdel table, so this concern about timing refrdel cleanup against ETL runs no longer applies as of that release. In the Extended Schema tables (ex. TaskX) there can still be deleted data present. The Extended Schema views join on the primary PMDB tables (ex. Task) and filter out any deleted data. Any deleted data that remains in the Extended Schema tables can be cleaned out at a designated time by running the cleanup procedure documented in the P6 Extended Schema white paper. This can be run occasionally, but it is not necessary to run it often unless large amounts of data have been deleted.
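
    For the 2.0 case, the safest pattern is to fold the purge into the ETL job itself so the timing can never drift. A minimal sketch of that cleanup step, run only after the ETL has completed successfully:

        -- Run only after the ETL has finished successfully,
        -- so no pending deletes are lost before they are processed.
        DELETE FROM refrdel;
        COMMIT;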

    Read the article

  • Class hierarchy problem in this social network model

    - by Gerenuk
    I'm trying to design a class system for a social network data model - basically a link/object system. Now I have roughly the following structure (simplified, and only relevant methods shown):

        class Data:
            "used to handle the data with mongodb"
            "can link, unlink data and also return other linked data"
            "is basically a proxy object that only stores _id and accesses mongodb on requests"
            "it looks like {_id: ..., _out: [id1, id2,...], _inc: [id3, id4, ...]}"
            def get_node(self, id):
                "create a new Data object from the underlying mongodb"
                "each data object can potentially create a reference object to new mongo data"
                "this is needed when the data returns the linked objects"

        class Node:
            """
            this class proxies linking calls to .data
            it includes additional network logic operations whereas Data
            only contains a basic database solution
            """
            def __init__(self, data):
                "the infrastructure realization is stored by composition as an included object, data"
                "Node basically proxies most calls to the infrastructure object data"
            def get_node(self, data):
                "creates a new object of class Object or Link depending on data"

        class Object(Node):
            "can have multiple connections to Link"

        class Link(Node):
            "has one 'in' and one 'out' connection to an Object"

    This system is working, however it maybe wouldn't work outside Python. I have two questions here: 1) I want the infrastructure of the data storage to be replaceable. Earlier I had Data as a superclass of Node so that it provided the necessary calls. But (without dirty Python tricks) you cannot replace the superclass dynamically. Is using composition therefore recommended? The drawback is that I have to proxy most calls (link, unlink, etc.). Any thoughts? 2) The class Node contains the common method .get_node, which is used to build new Object or Link instances after reading out the data. Some attribute of the data decides whether the object, which is stored only by id, should be instantiated as an Object or a Link. The problem here is that Node needs to know about Object and Link in advance, which seems dodgy. Do you see a different solution? Both Object and Link need to instantiate one of all possible types depending on what they find in their linked data. Are there any other ideas for how to implement a flexible Object/Link structure where the underlying database storage is isolated?
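
    For what it's worth, both concerns have fairly standard Python answers: __getattr__ can forward unhandled calls to the composed storage object so you don't hand-write every proxy method, and a small registry keyed by a type tag in the stored data lets Node build Object or Link instances without hard-coding either class. A sketch, assuming the stored document carries a hypothetical 'kind' field:

        class Node:
            """Wraps a swappable storage object and proxies calls to it."""

            _registry = {}  # maps a type tag found in the data to a Node subclass

            def __init__(self, data):
                self._data = data  # composition: the storage backend can be replaced

            def __getattr__(self, name):
                # Called only when normal attribute lookup fails, so link,
                # unlink, etc. fall through to the storage object for free.
                return getattr(self._data, name)

            @classmethod
            def register(cls, kind):
                def decorator(subclass):
                    cls._registry[kind] = subclass
                    return subclass
                return decorator

            def get_node(self, node_id):
                data = self._data.get_node(node_id)
                # 'kind' is an assumed type tag stored alongside _id
                return Node._registry[data.kind](data)


        @Node.register('object')
        class Object(Node):
            """Can have multiple connections to Link."""


        @Node.register('link')
        class Link(Node):
            """Has one 'in' and one 'out' connection to an Object."""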

    Read the article

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-Makers: Oracle Exadata Database Machine vs. IBM Power Systems. How to Weigh a Purchase Decision. October 2012. Download the full report here. In this research-based white paper conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen because it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording us a more specific data point for comparison. This research found that Oracle Exadata:

    - Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
    - Delivers dramatically higher performance, typically up to 12X improvement, as described by customers, over their prior solution.
    - Requires 40% fewer systems administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
    - Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
    - Supplies a highly available, highly scalable and robust solution that results in reserve capacity that makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively.

    Overall, Exadata operations and maintenance keep IT administrators from "living on the edge." And it's pre-engineered for long-term growth. Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article

  • OOW content for Pattern Matching....

    - by KLaker
    If you missed my sessions at OpenWorld then don't worry - all the content we used for pattern matching (presentation and hands-on lab) is now available for download. My presentation "SQL: The Best Development Language for Big Data?" is available for download from the OOW Content Catalog, see here: https://oracleus.activeevents.com/2013/connect/sessionDetail.ww?SESSION_ID=9101 For the hands-on lab ("Pattern Matching at the Speed of Thought with Oracle Database 12c") we used the Oracle-By-Example content. The OOW hands-on lab uses Oracle Database 12c Release 1 (12.1) and uses the MATCH_RECOGNIZE clause to perform some basic pattern matching examples in SQL. This lab is broken down into four main steps:

    1. Logically partition and order the data that is used in the MATCH_RECOGNIZE clause with its PARTITION BY and ORDER BY clauses.
    2. Define patterns of rows to seek using the PATTERN clause of the MATCH_RECOGNIZE clause. These patterns use regular expression syntax, a powerful and expressive feature, applied to the pattern variables you define.
    3. Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause.
    4. Define measures, which are expressions usable in the MEASURES clause of the SQL query.

    You can download the setup files to build the ticker schema and the student notes from the Oracle Learning Library. The direct link to the example on using pattern matching is here: http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:6781,2.
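
    For a flavor of what the lab covers, here is the classic V-shape query from the Oracle documentation, run against the lab's ticker schema: per symbol, it finds spans where the price fell and then rose again:

        SELECT *
        FROM ticker
        MATCH_RECOGNIZE (
          PARTITION BY symbol
          ORDER BY tstamp
          MEASURES strt.tstamp       AS start_tstamp,
                   LAST(down.tstamp) AS bottom_tstamp,
                   LAST(up.tstamp)   AS end_tstamp
          ONE ROW PER MATCH
          AFTER MATCH SKIP TO LAST up
          PATTERN (strt down+ up+)
          DEFINE
            down AS down.price < PREV(down.price),  -- price keeps falling
            up   AS up.price   > PREV(up.price)     -- then keeps rising
        ) mr
        ORDER BY mr.symbol, mr.start_tstamp;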

    Read the article

  • How to analyze data

    - by Subhash Dike
    We are working on an application that allows users to search/read some content in a particular domain. We want to add a capability to the app which can suggest content to the user based on their usage pattern (analyzing data based on frequency and relevance). Currently, every time a user searches or reads something we store that information in the backend database. We would like to use this data to present additional content to the user. Could someone explain what kind of tools will be required for such a job, and give an example? And what is this concept called: data analysis? data mining? business intelligence? or something else? Update: Sorry for being too broad; here is an example SQL database (just to give an idea, the actual db is a little different, with normalization and stuff):

        Table: UserArticles
        Fields: UserName | ArticleId | ArticleTitle | DateVisited | ArticleCategory

        Table: CategoryArticles
        Fields: Category | ArticleTitle | Author | etc.

    One category may have one or more articles. One user may have read the same article multiple times (in this case we place an additional entry in the UserArticles table). Task: use the information available in the UserArticles table and rank categories in the order in which they would be presented to the user automatically in another part of the application. Factors to be considered are frequency and recency. This might be possible through simple queries or may require specialized tools. Either way, the task is what is mentioned above. I am not too sure which route to take, hence the question. Thoughts??
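
    This particular ranking is within reach of a plain query; no specialized tooling is needed for it. A T-SQL-flavored sketch against the UserArticles table above, ranking one user's categories by visit frequency with ties broken by recency (the 90-day window is an arbitrary assumption):

        -- @UserName would be a parameter supplied by the application.
        SELECT ArticleCategory,
               COUNT(*)         AS visits,      -- frequency
               MAX(DateVisited) AS last_visit   -- recency
        FROM UserArticles
        WHERE UserName = @UserName
          AND DateVisited >= DATEADD(DAY, -90, GETDATE())  -- assumed recency window
        GROUP BY ArticleCategory
        ORDER BY visits DESC, last_visit DESC;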

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown their hacked-together backend for managing inventory, pricing and feeds to various shopping engines (Yahoo, 3d cart, Amazon, etc.). They currently manage about 12,000 SKUs and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away, and they need to replace/improve their current solution in order to hold them over. Their current solution was developed by two people who have left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions and feed generation for the shopping engines? The current solution looks like this:

    1) Inventory, pricing and product descriptions are maintained in a database and in NetSuite by employees
    2) New products are added to the database via import
    3) Twice a week data is extracted into a giant Excel spreadsheet
    4) The Excel file adjusts pricing based on some simple algorithms
    5) The Excel file exports about six different CSV feeds which are manually uploaded to Amazon, 3d cart, Yahoo, Google and Merchant Advantage
       a. Each feed is a variant of the product data with different field names and formatting
       b. Pricing levels differ between feeds
       c. Some products are not sent to all feeds
    6) Orders are manually parsed and the inventory is adjusted as needed once product is sold

    The new solution should:

    1) Import data from ODBC, CSV and NetSuite (CSV via FTP)
    2) Apply pricing changes via simple algorithms (< $80 add $10, > $200 add $25)
    3) Ensure margins are being met
    4) Format and generate a bunch of CSV and XML feeds
    5) Perhaps upload feeds to shopping engines automatically

    What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .NET stack is preferable but not mandatory. I've been looking at BizTalk but it may take too long to develop and deploy. Any suggestions?
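
    As a point of reference, the pricing rules in step 2 of the new solution are small enough that almost any replacement beats the spreadsheet; the hard part is the feed formatting, not the math. A hedged C# sketch of that rule step (the '>' in the second tier is reconstructed from context, since the original formatting appears to have eaten the comparison sign):

        public static class PricingRules
        {
            // "< $80 add $10, > $200 add $25" -- the '>' is inferred from context.
            public static decimal Apply(decimal basePrice)
            {
                if (basePrice < 80m)  return basePrice + 10m;
                if (basePrice > 200m) return basePrice + 25m;
                return basePrice;
            }
        }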

    Read the article

  • Desktop application, dependency injection

    - by liori
    I am thinking of applying a real dependency injection library to my toy C#/GTK# desktop application. I chose NInject, but I think this is irrelevant to my question. There is a database object, a main window and several utility window classes. It's clear that I can inject the database into every window object, so here DI is useful. But does it make sense to inject utility window classes into other window classes? Example: I have classes such as:

        class MainWindow {…}
        class AddItemWindow {…}
        class AddAttachmentWindow {…}
        class BrowseItemsWindow {…}
        class QueryBuilderWindow {…}
        class QueryBrowserWindow {…}
        class PreferencesWindow {…}
        …

    Each of the utility classes can be opened from MainWindow. Some utility windows can also be opened from other utility windows. Generally, there might be a really complex graph of who can open whom. So each of those classes might need quite a lot of other window classes injected. I'm worried that such usage will go against the suggestion not to inject too many classes at once and become a code smell. Should I use some kind of a service locator object here?
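
    One common way out, sketched here only as a suggestion: instead of injecting every window a window might open, inject a factory (a Func<T> works) for just the ones it actually opens, so windows are created lazily and constructor lists stay short. Ninject's factory extension can, if I recall correctly, bind such Func<T> factories automatically; the window classes below are stubs standing in for those listed above:

        using System;

        // Stubs standing in for the GTK# windows listed above.
        public class AddItemWindow { public void Show() { /* ... */ } }
        public class PreferencesWindow { public void Show() { /* ... */ } }

        public class MainWindow
        {
            private readonly Func<AddItemWindow> addItemFactory;
            private readonly Func<PreferencesWindow> preferencesFactory;

            // Only factories for windows MainWindow actually opens are injected.
            public MainWindow(Func<AddItemWindow> addItemFactory,
                              Func<PreferencesWindow> preferencesFactory)
            {
                this.addItemFactory = addItemFactory;
                this.preferencesFactory = preferencesFactory;
            }

            private void OnAddItemClicked()
            {
                addItemFactory().Show();  // window built (and wired) only when needed
            }

            private void OnPreferencesClicked()
            {
                preferencesFactory().Show();
            }
        }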

    Read the article

  • How to motivate visitors to comment

    - by Michal
    First I must apologize, because I am not sure this question is valid for the webmasters topic. I deal with the problem as a webmaster; however, I think this question is more related to marketing. Nevertheless, I searched for a marketing Stack Exchange on meta and did not find such a site. Background: Four days ago, I launched a portal with a database of barber salons, where people can find a salon by various criteria, see its photos and details, and also post a comment with their own opinion. The development took me half a year, and it took me another two months to fill the database with information about barbers (I also hired another three people for this job). I don't have a big problem getting people to my portal; I pay for PPC, comment on barber discussions, etc. In the past four days I've reached a satisfactory number of visitors. Problem: Everyone wants to search and read comments, but no one is willing to post their own opinion of a barber. So I've tried the following (2 days ago):

    - Made comments anonymous, so no one has to be afraid of compromising their identity with a salon owner
    - Prepared a competition for users in which they can win a cosmetics package if they comment on at least three different salons
    - Paid for a PPC campaign on Facebook telling people about the competition
    - Registered the competition on 20 competition portals

    And the result:

    - People are commenting on Facebook that the competition is a good idea
    - They are giving likes on Facebook
    - But no one has posted a single comment on a barber salon

    I am getting a little confused about what I am doing wrong. I will be thankful for any advice.

    Read the article

  • Using PDO with MVC

    - by mister martin
    I asked this question on Stack Overflow and received no response (it was closed as a duplicate, with no answer). I'm experimenting with OOP and I have the following basic MVC layout:

        class Model {
            // do database stuff
        }

        class View {
            public function load($filename, $data = array()) {
                if (!empty($data)) {
                    extract($data);
                }
                require_once('views/header.php');
                require_once("views/$filename");
                require_once('views/footer.php');
            }
        }

        class Controller {
            public $model;
            public $view;

            function __construct() {
                $this->model = new Model();
                $this->view = new View();
                // determine what page we're on
                $page = isset($_GET['view']) ? $_GET['view'] : 'home';
                $this->display($page);
            }

            public function display($page) {
                switch($page) {
                    case 'home':
                        $this->view->load('home.php');
                        break;
                }
            }
        }

    These classes are brought together in my setup file:

        // start session
        session_start();
        require_once('Model.php');
        require_once('View.php');
        require_once('Controller.php');
        new Controller();

    Now where do I place my database connection code, and how do I pass the connection on to the model?

        try {
            $db = new PDO('mysql:host='.DB_HOST.';dbname='.DB_DATABASE.'', DB_USERNAME, DB_PASSWORD);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        } catch(PDOException $err) {
            die($err->getMessage());
        }

    I've read about dependency injection, factories and miscellaneous other design patterns talking about keeping SQL out of the model, but it's all over my head using abstract examples. Can someone please just show me a straightforward, practical example?
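
    For what it's worth, the plainest version of constructor injection here is small. A sketch, assuming you change Controller's constructor to accept its collaborators ('items' is a placeholder table name):

        class Model
        {
            private $db;

            // The connection is handed in, not created here.
            public function __construct(PDO $db)
            {
                $this->db = $db;
            }

            public function getItems()
            {
                $stmt = $this->db->query('SELECT * FROM items');
                return $stmt->fetchAll(PDO::FETCH_ASSOC);
            }
        }

        // In the setup file: build PDO once, hand it to the Model,
        // and hand the Model (and View) to the Controller.
        try {
            $db = new PDO('mysql:host='.DB_HOST.';dbname='.DB_DATABASE, DB_USERNAME, DB_PASSWORD);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        } catch (PDOException $err) {
            die($err->getMessage());
        }

        new Controller(new Model($db), new View());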

    Read the article

  • Self Service Reporting With PowerPivot

    - by blakmk
    There are so many cool new features in SQL Server 2008 R2 that it was difficult for me to pick a topic for T-SQL Tuesday. But the one that I am now a secret fan of, I once resented for its creation. Let me explain: for years I have encountered reporting systems cobbled together in tools like Access and Excel, built by "database hobbyists" who had no formal training in database design or best practices. They would take their monstrosities as far as they could go before ultimately it stopped working or the person who wrote it left the company. At that point it would become the resident DBA's problem to support it as a live application. So when I first heard of PowerPivot, a sense of deja vu overtook me and I felt like the guy in the Austin Powers movie, knowing the inevitable is coming but somehow unsure how to get out of the way. But when I eventually saw it in action, I quickly realised that it is a very powerful tool. It has a much smaller "time to market" than traditional BI architectures. Combined with the new features of Excel, some pretty impressive dashboards can be produced. Of course PowerPivot is not a magic bullet, and along with potential scalability issues there are the usual issues, such as master data management and data quality, that cannot be overcome easily with PowerPivot. As a tool, though, it has potential. Traditional BI is expensive, both in terms of time and the amount of resources it takes to deliver the system. The time lag between an analyst or a commercial accountant requesting reports and the report being delivered can make a huge commercial difference. I have observed companies where empowered end users become extremely productive when allowed to plough into various disparate datasets. It may not be the correct way or the most sustainable, but it's cheap and quick. In these times when budgets are being slashed and we are forced to deliver more with less, why not empower the end user with a tool that is designed for exactly this task.... @blakmk

    Read the article

  • Honing Performance Tuning Skills on MySQL

    - by Antoinette O'Sullivan
    Get hands-on experience with techniques for tuning a MySQL server with the authorized MySQL Performance Tuning course. This course is designed for database administrators, database developers and system administrators who are responsible for managing, optimizing, and tuning a MySQL server. You can follow this live instructor-led training:

    - From your desk: choose from among the 800+ events on the live-virtual training schedule.
    - In a classroom: a selection of events/locations is listed below.

        Location                     Date               Delivery Language
        Prague, Czech Republic       1 October 2012     Czech
        Warsaw, Poland               9 July 2012        Polish
        London, UK                   19 November 2012   English
        Rome, Italy                  23 October 2012    Italian
        Lisbon, Portugal             17 September 2012  European Portuguese
        Aix-en-Provence, France      4 September 2012   French
        Strasbourg, France           16 October 2012    French
        Nieuwegein, Netherlands      3 September 2012   Dutch
        Madrid, Spain                6 August 2012      Spanish
        Mechelen, Belgium            1 October 2012     English
        Riga, Latvia                 10 December 2012   Latvian
        Petaling Jaya, Malaysia      10 September 2012  English
        Edmonton, Canada             27 August 2012     English
        Vancouver, Canada            27 August 2012     English
        Ottawa, Canada               26 November 2012   English
        Toronto, Canada              26 November 2012   English
        Montreal, Canada             26 November 2012   English
        Mexico City, Mexico          9 July 2012        Spanish
        Sao Paulo, Brazil            2 July 2012        Brazilian Portuguese

    To find a virtual or in-class event that suits you, go to http://oracle.com/education and choose a course and delivery type in your location.

    Read the article

  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation - the keys should be backed up often enough to have them at hand, and again whenever they change. The first key you need to back up is the Service Master Key, which each instance already has built in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys are the Database Master Keys, stored per database, if you've created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys - those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!
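
    For reference, the three backups look like this in T-SQL (the paths, database and certificate names, and passwords are examples only; store the passwords as carefully as the key files):

        -- 1) Service Master Key (one per instance)
        BACKUP SERVICE MASTER KEY
            TO FILE = 'E:\KeyBackups\InstanceSMK.key'
            ENCRYPTION BY PASSWORD = '<use a strong password>';

        -- 2) Database Master Key (per database, where one exists)
        USE SalesDB;  -- example database
        BACKUP MASTER KEY
            TO FILE = 'E:\KeyBackups\SalesDB_DMK.key'
            ENCRYPTION BY PASSWORD = '<use a strong password>';

        -- 3) A certificate created from the key, with its private key
        BACKUP CERTIFICATE SalesCert  -- example certificate name
            TO FILE = 'E:\KeyBackups\SalesCert.cer'
            WITH PRIVATE KEY (
                FILE = 'E:\KeyBackups\SalesCert.pvk',
                ENCRYPTION BY PASSWORD = '<use a strong password>');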

    Read the article

  • Microsoft Access as a Weapon of War

    - by Damon Armstrong
    A while ago (probably a decade ago, actually) I saw a report on a tracking system maintained by a U.S. Army artillery control unit.  This system was capable of maintaining a bearing on various units in the field to help avoid friendly fire.  I consider the U.S. Army to be the most technologically advanced fighting force on Earth, but to my terror I saw something in the title bar of an application displayed on a laptop behind one of the soldiers they were interviewing: Tracking.mdb.  Oh yes.  Microsoft Office Suite had made it onto the battlefield.  My hope is that it was just running as a front-end for a more proficient database (no offense, Access people), or that the soldier was tracking something else like KP duty or fantasy football scores.  But I could also see the corporate equivalent of a pointy-haired boss walking into a cube and asking someone who had piddled with Access to build a database for HR forms.  Except this pointy-haired boss would have been a general, the cube would have been a tank, and the HR forms would have been targets that, if something went amiss, would have been hit by a 500lb artillery round. Hope that soldier could write a good query.

    Read the article

  • Heading Out to Oracle Open World

    - by rickramsey
    In case you haven't figured it out by now, Oracle reserves an awful lot of announcements for Oracle Open World. As a result, the show is always a lot of fun for geeks. What will the Oracle Solaris team have to say? Will the Oracle Linux team have any surprises? And what about Oracle hardware? For my part, I'll be one of the lizards at the OTN Lounge with the OTN crew, handing out t-shirts to system admins and developers, or anyone who is willing to impersonate one. I understand, not everyone can have the raw animal magnetism of a sysadmin, or the debonair sophistication of a C++ developer, so some of you have no choice but to pretend. I won't judge. I'll also be doing video interviews of as many techie people as I can corner. I've got more than 30 interviews already scheduled. Most of them will be 3-5 minutes long. I'll be asking our best technical minds what's cool about their latest technologies and what impact it will have on system admins or system developers. I'll be posting those videos here: Find OTN Systems Videos from Oracle Open World Here! We've got some great topics in mind. A dummies guide to hardware-assisted cryptography with Glenn Brunette. ZFS deduplication. The momentum building around Oracle Solaris 11, with Lynn Rohrer, plus conversations with partners who have deployed Oracle Solaris 11. Migrating to Oracle Database with SQL Developer. The whole database cloud thing. Oracle VM and, of course, Oracle Linux. So even if you can't be part of the fun, keep an eye out for the videos on our YouTube channel. - Rick

    Read the article

  • NHibernate Tools: Visual NHibernate

    - by Ricardo Peres
    You probably know that I’m a big fan of Slyce Software’s Visual NHibernate. To me, it is the best tool for generating your entities and mappings from an existing database (it also allows you to go the other way, but I honestly have never used it that way). What I like most about it:

    - Great support: folks at Slyce always listen to your suggestions, give you feedback in a timely manner, and I was even lucky enough to have some of my suggestions implemented!
    - The templating engine, which is very powerful, and more user-friendly than, for example, MyGeneration’s; one of the included templates is Sharp Architecture;
    - Advanced model validations: it even warns you about having lazy properties declared in non-lazy entities;
    - Integration with NHibernate Validator and generation of validation rules automatically based on the database, or on user-defined model settings;
    - The designer: they opted for not displaying all entities in a single screen, which I think was a good decision; it has support for all inheritance strategies (table per class hierarchy, table per class, table per concrete class);
    - Generation of FluentNHibernate mappings as well as hbm.xml.

    I could name others, but… why don’t you see for yourself? There is a demo version available for downloading. By the way, I am in no way related to Slyce, I just happen to like their software!
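
    To give a feel for the output side, the FluentNHibernate mappings generated look along these lines - a hand-written sketch for a hypothetical Customer entity, not the tool's literal output:

        using System.Collections.Generic;
        using FluentNHibernate.Mapping;

        // Hypothetical entities; the tool would generate these from the tables.
        public class Customer
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<Order> Orders { get; set; }
        }

        public class Order
        {
            public virtual int Id { get; set; }
        }

        public class CustomerMap : ClassMap<Customer>
        {
            public CustomerMap()
            {
                Table("Customers");
                Id(x => x.Id).GeneratedBy.Identity();
                Map(x => x.Name).Not.Nullable();
                HasMany(x => x.Orders).Cascade.All().LazyLoad();
            }
        }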

    Read the article

  • How to show or direct a business analyst to do data modelling?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet. I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up. A lot of times there will be a little table of items that somehow I have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of this, trying to write something that can compare strings and make a best guess at it, and then go through the effort of creating interfaces for a user to match the imported data to the destination. I feel like if the business analysts were actually creating a data model, they would be forced to think about these relationships, and would have an appreciation for the need for natural or business keys to be part of the spreadsheet for the purposes of smoothly importing the data. The closest they come to business analysis is a big flat list of fields, and that would be fine if it were like any other data dictionary and included data types and relationships, but it isn't. They are just a bunch of names, with no indication of what type of data they might hold; it is up to me to guess. When I have pushed for more detail, they say that it is just busy work. How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance. They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response.

    Read the article

  • EPM Architecture: Reporting and Analysis

    - by Marc Schumacher
    Reporting and Analysis is the basis for all Oracle EPM reporting components. Through the Java-based Reporting and Analysis web application deployed on WebLogic, it enables users to browse through reports for all kinds of Oracle EPM reporting components. Typical users access the web application by browser through Oracle HTTP Server (OHS). The Reporting and Analysis web application talks to the Reporting and Analysis Agent using the CORBA protocol on various ports. All communication to the repository databases (EPM System Registry and Reporting and Analysis database) from the web and application layers is done using JDBC. As an additional data store, the Reporting and Analysis Agent uses the file system to lay down individual reports. While the reporting artifacts are stored on the file system, the folder structure and report-based security information are stored in the relational database. The file system can be either local or remote (e.g. network share, network file system). If an external user directory is used, the Reporting and Analysis services also communicate with this directory. The next post will cover WebAnalysis.

    Read the article

  • CodePlex Daily Summary for Thursday, September 27, 2012

    Popular Releases:
    - Visual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1. Contains the following improvements: better support for detecting the installed languages; the extract & inject commands won't run if Visual Studio is running; you may now run in extract or inject mode; the p/invoke code was cleaned up based on Code Analysis recommendations; when a p/invoke method fails, the Win32 error message is now displayed; error messages use red text; status messages use green text.
    - MCEBuddy 2.x: MCEBuddy 2.2.16: Changelog for 2.2.16 (32bit and 64bit). A standalone remote client is now also available to control the engine remotely. Added support for remote connections for status and configuration; MCEBuddy now uses port 23332. The remote server name, remote server port and local server port can be updated from the MCEBuddy.conf file, BUT the service or GUI needs to be restarted (i.e. reboot, restart service or restart program) for it to take effect. Refer to the documentation for more details http://mce...
    - LoLHQ - Personal League of Legends Assistant: LoLHQ v1.0: LoLHQ version 1.0 (full, portable). Instructions: download, extract, run LoLHQ.exe; if needed, specify the League of Legends installation path.
    - D3 Loot Tracker: 1.3.1 (patch): Magic find value will now display properly and includes follower value. ILevel of legendary items will now display properly.
    - ZXing.Net: ZXing.Net 0.9.0.0: On the way to a release 1.0, the API should be stable now with this version: sync with rev. 2393 of the java version; improved api; better Unity support; Windows RT binaries; Windows CE binaries; new Windows Service demo; new WPF demo.
    - SSIS GoogleAnalyticsSource: Version 1.1 Alpha 2: The component now uses the Google API V2.4, including the management API.
    - Ext Spec: Ext Spec 1.0.0: Completed remaining tasks: 7, 12, 17.
    - The GLMET Project: Sound Recorder: --
    - WinRtBehaviors: V1.0.2: Includes simple Blend support.
    - MVC Bootstrap: MVC Bootstrap 0.5.1: A small demo site, based on the default ASP.NET MVC 3 project template, showing off some of the features of MVC Bootstrap. This release uses Entity Framework 5 for data access and Ninject 3 for dependency injection. If you download and use this project, please give some feedback, good or bad!
    - Simple PM - Project Management Simplified!: Simple Pm v1 - Alpha (re-released): INSTALLATION GUIDE: 1. Run the web setup, which will install the web app to IIS. 2. Make sure you select the "ASP.NET v4.0" application pool during the installation. 3. Create a database named "SimplePm". 4. Run the attached database script on this database. 5. Change the database username and password in the connection strings defined as SimplePmEntities and ApplicationServices in the Web.config file. 6. That's it! Simple Pm is ready to go! For any installation assistance feel free to c...
    - menu4web: menu4web 1.0 - free javascript menu for web sites: menu4web 1.0 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. The minified m4w.js library is less than 9K. Includes 21 menu examples of different styles. Can be freely distributed under The MIT License (MIT).
    - Rawr: Rawr 5.0.0: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP): we now have a Rawr official addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    - Coevery - Free CRM: Coevery 1.0.0.26: The zh-CN issue has been solved. We also added a project management module.
    - VidCoder: 1.4.1 Beta: Updated to HandBrake 4971. This should fix some issues with stuck PGS subtitles. Fixed a build break which prevented pre-compiled XML serializers from showing up. Fixed a problem where a preset would get errantly marked as modified when re-opening the encode settings window or importing a new preset.
    - JSLint for Visual Studio 2010: 1.4.0: VS2012 support is alpha.
    - BlackJumboDog: Ver5.7.2: 2012.09.23 Ver5.7.2 (1)InetTest?? (2)HTTP?????????????????100???????????
    - Player Framework by Microsoft: Player Framework for Windows 8 (Preview 6): IMPORTANT: see the list of breaking changes from preview 5. Added a separate samples download with .vsix dependencies instead of source dependencies; support for FreeWheel SmartXML ad responses; support for Smooth Streaming SDK DownloaderPlugins; support for VMAP and TTML polling for live scenarios; support for custom smooth streaming byte stream and scheme handlers; support for a new play time and position tracking plugin; added IsLiveChanged event; added AdaptivePlugin.MaxBitrate property; add...
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.8: Version 2.5.0.8 (Milestone 8). This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog legend: [B] breaking change; [O] marked member as obsolete. WAF: mark the class DataModel as serializable. InfoMan: minor improvements. InfoMan: add unit tests for all modules. Othe...
    - LogicCircuit: LogicCircuit 2.12.9.20: Logic Circuit is educational software for designing and simulating logic circuits. An intuitive graphical user interface allows you to create an unrestricted circuit hierarchy with multi-bit buses, debug circuit behavior with the oscilloscope, and navigate the hierarchy of running circuits. Changes in this version: toolbars on the text note dialog are more flexible now; you can select the font face, size, color, and background of the text you are typing; RAM can now be initialized to one of the following: random va...

    New Projects:
    - Altairis Binary Store Provider: Altairis Binary Store is a provider-based system for storing arbitrary binary data either in the file system or in Azure Blob Storage.
    - Azure Diagnostic Log Viewer: This tool helps in viewing logs written to Azure diagnostics log tables. <MORE DETAILS COMING SOON>
    - bluequiz: A project that helps individuals, organizations and schools run online multiple-choice tests and manage scores more easily.
    - Detect User Geo location: Detect the geo location of site visitors by HTML5 geolocation (first) or ipinfodb.com (second choice).
    - Educards: Educational cards.
    - EuWangSSO: SSO (single sign-on) solution.
    - Free Aspx Image Gallery: This is the first basic release of my free aspx image gallery project. It is free to use and modify by the user without any need of providing any credit to me.
    - GEFe: GEFe
    - guohai's project: an open source project!
    - ISMOT - Async Helper: This is a wrapper library for Microsoft's Async Library (CTP). Using this library greatly facilitates async calls in Silverlight and Windows Phone projects.
    - LoLHQ - Personal League of Legends Assistant: Check detailed ELO ratings and statistics of your teammates; personalize counter-pick lists; view champion details.
    - MNK-HKM addon: Special thanks to: Kill Us Or Die Trying.
    - PCV_CLINIC: my project PCV_Clinic
    - Remote Commands: Program which allows controlling a computer by sending e-mails with commands.
    - Resource File Comparer: A quick utility to help compare the availability of required resource strings in all resource files for an application.
    - Rich Client Template: The new Rich Client Template for Visual Studio 2012.
    - Simple Microsoft Excel Document Converter (Convert To XLS, XLSX, PDF, XPS): This library is based on Microsoft.Office.Interop.Excel. It can convert Excel documents to PDF or XPS in C#.
    - TCC News: test
    - TesPro: ???
    - The Eggbert Chronicles: The Eggbert Chronicles, a 2.5D platformer built in Unity and created using Javascript and C#.
    - Tombola XNA: A tombola (bingo-style) game using XNA.
    - Tools and Company: "Tools and Company" is a DLL which contains a set of useful and generic classes and functions in C#. I use this DLL in my different other C# projects.
    - TradeIt: :)
    - usermade: user made website
    - WIT Sync Manager 2010: Enables you to synchronize virtually all aspects of work item types and related artifacts across collections and projects in TFS 2010.
    - xebictetris: Xebic Tetris Web Test
    - xmwms: xmwms
    - XNA for Windows 8 (Windows RT): Officially there is no support for XNA Game Studio in Windows 8 (WinRT), so this is a lightweight XNA framework made using the .NET 4.5 and Metro-style APIs.
    - ????????????? ?????????: ???????? ?????? ?? ???????? ?????????????? ??????????

    Read the article

  • Coda-like experience for Ubuntu

    - by Dillon Gilmore
    I'm a web developer who's going to transition from using Mac OS X to Ubuntu. I've been using Coda for some time, only because it makes web development easy. I know a full-fledged equivalent isn't available for Linux, but I would like to know about apps that specialize in the same tasks that Coda offers. I plan on switching to Vim for code editing; I'm extremely proficient, and will install the Janus plugin and be good to go for editing code. One thing that makes editing in Coda so amazing is that it's extremely good at SFTP: you can drag and drop files and/or folders from your local drive to the server, and you can also edit code directly on the server. The problem here is that using Vim I don't know of a way to edit code on a remote server while using my own Vim settings and plugins. To solve this, I would like to know of a good SFTP client OR a good SFTP CLI. A CLI that could synchronize your files after a file has been modified would be perfect, but is not necessary. Now, one of the biggest and best features of Coda is its ability to view your databases: you get to create a database, create tables, add stuff, delete stuff and view the contents of a table (all this without writing a single SQL statement). I will admit that databases are my weak point, but they are a very important part of my job. A tool that specializes in databases would be perfect. I wouldn't prefer to use the command line for database stuff, but if there is a CLI for databases that I'm missing, it could potentially be useful. So I guess I'm asking for two things: a tool that makes databases easier to visualize, and a tool that assists in pushing my local code to a server.

    Read the article

  • Custom field names in Rails error messages

    - by Madhan ayyasamy
    The defaults in Rails with ActiveRecord are beautiful when you are just getting started and are creating everything for the first time. But once you get into it and your database schema becomes a little more solidified, the things that would have been easy to do by relying on the conventions of Rails require a little bit more work. In my case, I had a form where there was a database column named "num_guests", representing the number of guests. When the field fails to pass validation, the error message is something like

        Num guests is not a number

    Not quite the text that we want. It would be better if it said

        Number of guests is not a number

    After doing a little bit of digging, I found the human_attribute_name method. You can override this method in your model class to provide alternative names for fields. To change our error message, I did the following:

        class Reservation
          ...
          validates_presence_of :num_guests
          ...
          HUMAN_ATTRIBUTES = {
            :num_guests => "Number of guests"
          }

          def self.human_attribute_name(attr)
            HUMAN_ATTRIBUTES[attr.to_sym] || super
          end
        end

    Since Rails 2.2, this method is used to support internationalization (i18n). Looking at it, it reminds me of Java's resource bundles and Spring MVC's error messages. Messages are defined based off a key, and there's a chain of lookups that gets applied to resolve an error's message. Although I don't see myself doing any i18n work in the near term, it is cool that we have that option now in Rails.
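
    The i18n route mentioned above reaches the same result without overriding the method: from Rails 2.2 on, human_attribute_name will pick the name up from a locale file. A sketch, assuming the model is Reservation as above:

        # config/locales/en.yml
        en:
          activerecord:
            attributes:
              reservation:
                num_guests: "Number of guests"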

    Read the article

  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges - loading data from data stores, generating XML, testing, integrating with web services, and then delivering the deployment - takes a lot of coding and effort, and then there is writing the documentation, models and schema, doing naming and design rule (NDR) checks, and packaging all of this together (such as for NIEM IEPD use). What if there was a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor! Open-XDX uses code-free techniques in combination with CAM templates and visual drag and drop to rapidly design your XML exchange. Then Open-XDX will automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug-and-play with your middleware stack, all with just a few lines of Java code (about 5, actually). You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes. To see a demonstration of using Open-XDX with a MySQL data store and integrating with Oracle WebLogic Server, please see this short video - http://youtube.com/user/TheCameditor There is also a Quick Guide available that provides more technical insights, along with a sample pack download of templates and SQL that you can try for yourself. Head on over to our project resource site to learn more, download the latest CAM Editor and see links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information sharing initiatives using this new innovative approach.

    Read the article

  • From a DDD perspective is a report generating service a domain service or an infrastructure service?

    - by Songo
    Let's assume we have the following service whose responsibility is to generate Excel reports:

        class ExcelReportService {
            public String generateReport(String fileFormatFilePath, ResultSet data) {
                ReportFormat reportFormat = new ReportFormat(fileFormatFilePath);

                ExcelDataFormatterService excelDataFormatterService = new ExcelDataFormatterService();
                FormattedData formattedData = excelDataFormatterService.format(data);

                ExcelFileService excelFileService = new ExcelFileService();
                String reportPath = excelFileService.generateReport(reportFormat, formattedData);
                return reportPath;
            }
        }

    This is pseudocode for the service I want to design, where:

    - fileFormatFilePath: path to a configuration file where I'll keep the format of my Excel file (headers, column widths, number of columns, etc.)
    - data: the actual records returned from the database. This data can't be used directly, because I might need to make further calculations on the data before inserting it into the Excel file.
    - ReportFormat: value object to hold the report format; has methods like getHeaders(), getColumnWidth(), etc.
    - ExcelDataFormatterService: a service to hold any logic that needs to be applied to the data returned from the database before inserting it into the file.
    - FormattedData: value object that represents the formatted data to be inserted.
    - ExcelFileService: a wrapper on top of the 3rd-party library that generates the Excel file.

    Now how do you determine whether a service is an infrastructure or a domain service? I have the following three services here: ExcelReportService, ExcelDataFormatterService and ExcelFileService?

    Read the article

  • SQL Server DBA - How to get a good one!

    - by ETFairfax
    I'm a lone developer. I am currently developing an application which is seeing me get way, way, way out of my depth when it comes to SQL DBA'ing, and I have come to realise that I should hire a DBA to help me (which has full support from the company). Problem is - who? This SO thread sees someone hire a DBA only to realise that they will probably cause more harm than good! Also, I have just had a bad experience with an ASP.NET/C# contractor that has let us down. So, can anyone out there on SO either... a) Offer their services. b) Forward me on to someone that could help. c) Give some tips on vetting a DBA. I know this isn't a recruitment site, so maybe some good answers for c) would be a benefit for other readers!! BTW: The database is SQL Server 2008. I'm running into performance issues (mainly timeouts) which I think would be sorted out by some proper indexing. I would also need the DBA to provide some sort of maintenance plan, and to review how our database will deal with what we intend to throw at it in the future!

    Read the article
