Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.

Page 982/1121 | < Previous Page | 978 979 980 981 982 983 984 985 986 987 988 989  | Next Page >

  • Methodologies for Managing Users and Access?

    - by MadBurn
    This is something I'm having a hard time getting my head around. I think I might be making it more complicated than it is. What I'm trying to do is develop a method to store users in a database with varying levels of access across different applications. I know this has been done before, but I don't know where to find how it was done. Here is an example of what I need to accomplish:
    UserA - Access to App1, App3 & App4, and can add new users to App3, but not to 4 or 1.
    UserB - Access to App2 only, with ReadOnly access.
    UserC - Access to App1 & App4, and is able to access the Admin settings of both apps.
    In the past I've just used user groups. However, I'm reaching a phase where I need a bit more control over each individual user's access to certain parts of the different applications. I wish this were as cut and dried as being able to give a user a role and letting each role inherit from the last. This is what I need to accomplish, but I don't know any methods of doing it. I could easily just design something that works, but I know this has been done, I know this has been studied, and I know this problem has been solved by much better minds than my own. This is for a web application using SQL Server 2008. I don't need to store passwords (LDAP) and the information I need to store is actually very limited: basically just username and access.
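
    A minimal sketch of the classic model for this kind of per-user, per-app permissioning: a junction table keyed by user, application and permission (all names below are illustrative, not from the question):

        -- users, applications, and the named permissions that link them
        CREATE TABLE app_user (
            user_id   INT IDENTITY(1,1) PRIMARY KEY,
            username  NVARCHAR(100) NOT NULL UNIQUE
        );

        CREATE TABLE application (
            app_id   INT IDENTITY(1,1) PRIMARY KEY,
            app_name NVARCHAR(100) NOT NULL
        );

        CREATE TABLE permission (
            permission_id   INT IDENTITY(1,1) PRIMARY KEY,
            permission_name NVARCHAR(50) NOT NULL  -- e.g. 'Access', 'ReadOnly', 'AddUsers', 'Admin'
        );

        -- one row per grant, e.g. (UserA, App3, AddUsers)
        CREATE TABLE user_app_permission (
            user_id       INT NOT NULL REFERENCES app_user(user_id),
            app_id        INT NOT NULL REFERENCES application(app_id),
            permission_id INT NOT NULL REFERENCES permission(permission_id),
            PRIMARY KEY (user_id, app_id, permission_id)
        );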

    Read the article

  • Testing Workflows – Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx
    This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one will focus on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.
    Ping Pong
    Ping Pong is a methodology commonly used in pair programming. One developer writes a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer.
    The reasoning behind this testing methodology is to facilitate pair programming. That is to say, this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. a low bus number).
    Test Blazer
    Test Blazing, in some respects, is also a pairing strategy. The developers don't work side by side on the same task at the same time. Instead one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task.
    The next day, or later in the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass, they fetch the latest version of the test project from source control to get the newer tests.
    This methodology has some of the benefits of pair programming, namely lowering the bus number. It can be a good way of adding an extra developer to a project without slowing it down too much. The production coder isn't slowed down writing tests. The tests are in a separate project from the production code, so there shouldn't be any merge conflicts despite two developers working on the same solution.
    This methodology is also a good test of the tests. Can another developer figure out what the system should do just by reading the tests? This question will be answered as the production coder works their way through the test blazer's tests.
    Test Driven Development (TDD)
    TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing tests or production code. You start by writing a failing (red) test, then write the simplest production code possible to get the code working (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle.
    The goal of TDD isn't the creation of a suite of tests; however, that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components. This is generally considered a better design than a large, highly coupled ball of mud.
    TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. refactoring) the design of the system emerges.
    For further information on TDD, I highly recommend the series "Is TDD Dead?". It discusses its pros and cons and when it is best used.
    Acceptance Test Driven Development (ATDD)
    Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, Acceptance Tests focus on the larger integrated environment. Acceptance Tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer.
    ATDD generally uses the same tools as TDD. However, ATDD uses fewer mocks and test doubles than TDD. ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.
    Behaviour Driven Development (BDD)
    BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests.
    Typically different tooling is used for BDD than for acceptance and unit testing. This is done because the audience is not just developers. Tools using the Gherkin family of languages allow test scenarios to be described in an English format. Other tools such as MSpec or FitNesse also strive for highly readable behaviour-driven test suites.
    Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests. Usually they're called "examples", "behaviours", "scenarios", or "specifications".
    This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say you need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say you need to define the specification or behaviour of the system before you can start, you'll get more cooperation.
    Keep these test-first and test-after workflows in your tool belt. With them you'll be able to find new opportunities to apply them.

    Read the article

  • Benefits of Server-side Coding

    There are numerous advantages to server-side scripting languages over client-side languages when it comes to creating web sites that are more compelling than a standard static site. Server-side scripts are executed on a web server while assembling the data to return to a client. These scripts allow developers to modify the content being sent to the user before it is returned, as well as to store information about the user. In addition, server-side scripts run in a controllable environment. The same cannot be said for client-side languages, because the developer cannot control the user's environment the way they can a web server: some users may turn off client scripts, some may only be allowed limited access on their system, and others may be able to gain full control of the environment. I have been developing web applications for over 9 years, and I have used server-side languages for most of the applications I have built. Here is a list of common things I have developed with server-side scripts.
    List of Common Generic Functionality:
    - Send Email
    - FTP Files
    - Security / Access Control
    - Encryption
    - URL rewriting
    - Data Access
    - Data Creation
    - I/O Access
    The one important feature server-side languages will help me with on my website is Data Access, because my component will be backed by a SQL Server database. I believe that form validation is one instance where server-side scripts and JavaScript can be used interchangeably, because it does not matter how or where the data is validated as long as the data that gets inserted is valid. However, my personal experience would sway my decision on what type of language to use for form validation, because both have advantages and disadvantages depending on the situation.
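
    Since the component is backed by a SQL Server database, it is worth noting that the database itself can act as the last line of defense regardless of where form validation runs. A minimal sketch, with illustrative table and rule names:

        -- whatever validates the form, these constraints guarantee
        -- nothing invalid is inserted (names are illustrative)
        CREATE TABLE registration (
            registration_id INT IDENTITY(1,1) PRIMARY KEY,
            email           NVARCHAR(256) NOT NULL,
            age             INT           NOT NULL,
            CONSTRAINT chk_registration_age   CHECK (age BETWEEN 13 AND 120),
            CONSTRAINT chk_registration_email CHECK (email LIKE '%_@_%._%')
        );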

    Read the article

  • Automate the backup of my databases and files with cron

    - by Patrick
    Hi, I want to automate the backup of my databases and files with cron. Should I add the following lines to crontab?

        mysqldump -u root -pPASSWORD database_name | gzip > /home/backup/database_`date +\%m-\%d-\%Y`.sql.gz
        svn commit -m "Committing the working copy containing the database dump"

    First of all, is this a good approach? It is also not clear to me how to specify the repository and the working copy to svn. And how can I run svn only when the mysqldump is done, and not before, to avoid conflicts?

    Read the article

  • sqlplus: Running "set lines" and "set pagesize" automatically

    - by katsumii
    This is a follow-up to my previous entry, "Using the full tty real estate with sqlplus" (INOUE Katsumi @ Tokyo). 'rlwrap' is widely used to add history and command-line editing to 'sqlplus'. Here is another, but again kludgy, implementation. First, this is the alias:

        alias sqlplus="rlwrap -z ~/sqlplus.filter sqlplus"

    And this is the content of the filter file:

        #!/usr/bin/env perl
        use lib ($ENV{RLWRAP_FILTERDIR} or ".");
        use RlwrapFilter;
        use POSIX qw(:signal_h);
        use strict;

        my $filter = new RlwrapFilter;
        $filter -> prompt_handler(\&prompt);
        sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
        $SIG{WINCH} = 'winchHandler';
        $filter -> run;

        sub winchHandler {
            $filter -> input_handler(\&input);
            sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
            $SIG{WINCH} = 'winchHandler';
            $filter -> run;
        }

        sub input {
            $filter -> input_handler(undef);
            return `resize |sed -n "1s/COLUMNS=/set linesize /p;2s/LINES=/set pagesize /p"` . $_;
        }

        sub prompt {
            if ($_ =~ "SQL> ") {
                $filter -> input_handler(\&input);
                $filter -> prompt_handler(undef);
            }
            return $_;
        }

    I hope I can compare these 2 implementations after testing more and getting some feedback.

    Read the article

  • Determine server specs for a Rails app with a MySQL database (on AWS)

    - by Rogier
    I developed an intranet application with Rails (3.2) for one of my customers. There will be around 30-40 employees working with it. The backend is MySQL (5). What would be the best way to determine the server specs needed? Given:
    - max. load will be roughly 2400 (40 users * 60) HTTP requests (mixed GET / POST) per hour
    - 15% of these calls are JSON calls (iOS)
    - the avg request will make between 5-10 database calls
    - 500-800 SQL INSERTs per day
    - webpages are fairly simple (no images, just text)
    - the avg webpage makes 15 requests (css/js/etc) and has a total size of 35-45 KB
    More specifically, since they need access from multiple geographical locations, we are thinking of running a Bitnami Ruby stack in the AWS cloud (uptime is important). Any thoughts on an AWS instance (small/medium) and utilization (light/medium/heavy)? Thanks!

    Read the article

  • Writing Web "server less" applications

    - by crodjer
    TL;DR: What are the prospects for writing applications that are completely based on a REST database server (CouchDB), i.e. web applications that access the DB directly instead of having a web server in between? I recently started looking at some NoSQL databases. MongoDB seems to be a popular choice, and I liked the project. But I personally liked the REST interface of CouchDB. So what I wanted to know is whether there is the possibility of applications (maybe cached apps in a web browser, a Chrome extension, etc.) which could just query the database directly with no need for a web server in between. All the computational logic would reside in the client application, and the database would do what it does: CRUD. Since most client frameworks support REST queries (I don't know of one which doesn't), it could be a good way of writing applications well optimized for the respective framework. These applications won't be doing complicated computation, but would still provide enough functionality to replace lots of conventional applications. Are there existing resources and projects which would help me move towards writing such applications, and what is the scope for developing this way? Are there any technical/security issues with this? This post will help me decide whether to look into projects like CouchDB (and maybe dive into Erlang later) or stay with conventional frameworks (like Django) and SQL databases.
    Update: A specific point of such apps I had in mind is the creation of offline applications just by replicating CouchDB data on the client.

    Read the article

  • Create MSDB Folders Through Code

    You can create package folders through SSMS, but you may also wish to do this as part of a deployment process or installation. In this case you will want a programmatic method for managing folders, so how can this be done? The short answer is: go and look at the table msdb.dbo.sysdtspackagefolders90. This is where folder information is stored, using a simple parent-child hierarchy. To add a new folder directly we just insert into the table:

        INSERT INTO dbo.sysdtspackagefolders90 (
             folderid
            ,parentfolderid
            ,foldername)
        VALUES (
             NEWID()                 -- New GUID for our new folder
            ,<<Parent Folder GUID>>  -- The parent folder GUID if a child of another folder, or the root GUID 00000000-0000-0000-0000-000000000000
            ,<<Folder Name>>)        -- New folder name

    There are also some stored procedures:
    - sp_dts_addfolder
    - sp_dts_deletefolder
    - sp_dts_getfolder
    - sp_dts_listfolders
    - sp_dts_renamefolder
    To add a new folder to the root we could call the sp_dts_addfolder stored procedure:

        EXEC msdb.dbo.sp_dts_addfolder
             @parentfolderid = '00000000-0000-0000-0000-000000000000' -- Root GUID
            ,@name = 'New Folder Name'

    The stored procedures wrap very simple SQL statements, but provide a level of security, as they check the role membership of the user and do not require permissions to perform direct table modifications.
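
    Because the table is a plain parent/child hierarchy, a recursive query can also list the whole folder tree. A sketch against the columns described above:

        -- walk the hierarchy from the root, building a path for each folder
        WITH folder_tree AS (
            SELECT folderid, foldername,
                   CAST('\' + foldername AS NVARCHAR(4000)) AS folder_path
            FROM msdb.dbo.sysdtspackagefolders90
            WHERE parentfolderid = '00000000-0000-0000-0000-000000000000'  -- root
            UNION ALL
            SELECT f.folderid, f.foldername,
                   CAST(t.folder_path + '\' + f.foldername AS NVARCHAR(4000))
            FROM msdb.dbo.sysdtspackagefolders90 AS f
            JOIN folder_tree AS t ON f.parentfolderid = t.folderid
        )
        SELECT folder_path FROM folder_tree ORDER BY folder_path;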

    Read the article

  • Real-time stock market application

    - by Sam
    I'm an amateur programmer. I'd like to develop a software application (like TradeStation) to analyse real-time market data. Please tell me if the following approach is correct, i.e. the procedures, knowledge or software needed, etc.:
    1. Use a DB to read the real-time feed from the data provider: what would be the right DB to use? I know it should be a time-series one. Can I use SQL Server, MySQL, or others? What database can receive a real-time data feed? Do I need to configure the DB to do this?
    2. If the real-time data is in ASCII form, how can it be converted so that it can be read by the DB and my application? Do I have to write code or just use some add-ins? What kind of add-ins are needed?
    3. How should I code the program to retrieve the changing data from the DB so that the analysis software's on-screen data can also change asynchronously (like RTD in Excel)?
    4. Which aspects of programming do I need to learn to develop the above? Are there web resources/books I can refer to for more information?
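
    On the storage side, the usual shape is an append-only tick table plus a polling query keyed on time. A minimal sketch with illustrative names (not a recommendation for a specific engine):

        -- append-only table of incoming ticks
        CREATE TABLE price_tick (
            symbol    VARCHAR(12)   NOT NULL,
            tick_time DATETIME      NOT NULL,
            price     DECIMAL(18,6) NOT NULL,
            volume    BIGINT        NOT NULL
        );
        CREATE INDEX ix_price_tick ON price_tick (symbol, tick_time);

        -- the application polls for anything newer than the last row it saw,
        -- which is what lets the screen update asynchronously
        SELECT symbol, tick_time, price, volume
        FROM price_tick
        WHERE symbol = 'MSFT' AND tick_time > @last_seen_time
        ORDER BY tick_time;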

    Read the article

  • Creating, using and managing XML component dictionaries quick tutorials

    - by drrwebber
    XML Component Dictionary capabilities are provided in conjunction with the CAM Editor toolset. These dictionaries accelerate the development of consistent XML information exchanges using standard sets of dictionary components. The quick tutorials are aimed at showing the 'how to' of the basic capabilities, to jump-start the use of XML dictionaries with the CAM Editor. The collection of dictionary tutorial videos runs for a total of approximately 20 minutes. Each video can also be reviewed individually. Learn how to use the dictionary functions to create dictionaries by harvesting data model components from existing XSD schemas, SQL database table schemas, or simple Excel / Open Office spreadsheets with tables of components listed. Also included are tips and functions relating to the use of NIEM exchange development, IEPD and EIEM techniques. These videos should be viewed in conjunction with the overall concepts and techniques described in the companion video on the CAM Editor and Dictionaries overview. The approach is aligned with OASIS and Core Components Technical Specification (CCTS) standards for XML components and dictionaries. Dictionary collections can be stored locally on the file system or local network, shared collaboratively on the web or in a cloud deployment, or managed securely using the Oracle Enterprise Repository (OER) tool. Also included are techniques relating to the use of the NIEM approach for developing XML exchange schemas and IEPD packages, including generating reuse scores, wantlists, and cross-reference spreadsheets. Included in the latest release of the CAM Editor is the ability to use the analyse dictionary tool to determine duplicate components, conflicting component definitions, missing component descriptions and so on. This ensures high-quality dictionary component specifications. Using the CAM Editor you can also create MindMap models and UML physical models of your dictionary component sets. For a complete guide to using the CAM Editor see the main YouTube video tutorials website and the CAM Editor website.

    Read the article

  • OS X can't resolve localhost suddenly

    - by Conor
    Last week I fired up a website that I'm currently developing locally, only to find out that it wasn't working as it had been the night before (or at all). After an initial stage of panic and 'what did I do' moments... I deduced that the problem was that my OS X install now won't resolve localhost properly, so connections to my SQL database were failing. I can still ping localhost in the terminal, but in order to get my websites up and running again, I had to change all the localhost entries to 127.0.0.1. This isn't a huge problem, as everything is up and running again, but I would like to get to the bottom of it. I have a sneaking suspicion that an Apple software update caused this issue, as I don't recall doing anything else that would have had any effect. Other than my hosts file (which looks normal), what else could be causing this? Running OS X 10.6.4.

    Read the article

  • "System.Data.OracleClient requires Oracle client software version 8.1.7 or greater." Error Message

    - by Jandost Khoso
    Quick resolution: give full permission to AUTHENTICATED USERS on the following folders:
    a) ORACLE_HOME
    b) Program Files\ORACLE
    Then check your PATH. You might have installed several clients on your system, and your .NET application is pointing to a home with an inappropriate client. What your .NET application should load is an OCI.DLL with a file version of 8.1.7 or greater. According to the MSDN document Oracle and ADO.NET: "The .NET Framework Data Provider for Oracle provides access to an Oracle database using the Oracle Call Interface (OCI) as provided by Oracle Client software. The functionality of the data provider is designed to be similar to that of the .NET Framework data providers for SQL Server, OLE DB, and ODBC." The MSDN document System Requirements (Oracle) says: "The .NET Framework Data Provider for Oracle requires Microsoft Data Access Components (MDAC) version 2.6 or later. MDAC 2.8 SP1 is recommended. You must also have Oracle 8i Release 3 (8.1.7) Client or later installed." Both the .NET Framework Data Provider for Oracle and the Oracle Data Provider for .NET are data providers for accessing an Oracle database. The former ships with the .NET Framework and requires Oracle client version 8.1.7 or above. The latter is provided by Oracle and requires Oracle client version 9.2 or later. The Oracle Data Provider for .NET (ODP.NET) features optimized ADO.NET data access to the Oracle database. ODP.NET allows developers to take advantage of advanced Oracle database functionality, including Real Application Clusters, XML DB, and advanced security. See the document Comparing the Microsoft .NET Framework 1.1 Data Provider for Oracle and the Oracle Data Provider for .NET for more information about the differences.

    Read the article

  • Agile Data Book from O'Reilly Media

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/07/01/153309.aspx
    As part of my ongoing self-education, and with some free time approaching (yeah, both are a must for every IT person and geek!), I have carefully examined the latest trends in the Computersphere with whatever tools I had at my disposal (nothing really fancy was used) and came to the conclusion that for a database pro the *hottest* topic today is undoubtedly #BigData and all the rapidly growing and spawning ecosystem around it. Having recently immersed myself into the NoSQL world (let me tell you right away, NoSQL means Not Only SQL), one book really stood out from the crowd. Book site: http://shop.oreilly.com/product/0636920025054.do
    Despite being a new book, I am sure it will end up on the tables of many Big Data generalists. In a few dozen words, that is primarily for two reasons:
    1) The author understands that a typical business today cannot wait too long for a Data Scientist to deliver results, demanding as usual a very quick turnaround on investment (ROI), and
    2) The book covers all the needed and proven modern brick-and-mortar offerings to get the job done by a relative newcomer to the Big Data world.
    It certainly enables such a professional to grow and expand based on the acquired knowledge, and one can truly do it very fast.

    Read the article

  • SSIS Catalog: How to use environment in every type of package execution

    - by Kevin Shyr
    Here is a good blog post on how to create an SSIS Catalog and set up environments: http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx
    Here I will summarize the 3 ways I know of so far to execute a package while using variables set up in an SSIS Catalog environment.
    The first way: we have an SSIS project with a reference to an environment, and one of the project parameters uses a value set up in the environment called "Development". With this setup, you are limited to executing packages by right-clicking on them in the SSIS Catalog list and selecting Execute, but you are free to choose an absolute or relative path to the environment. The following screenshot shows the 2 available paths to your SSIS environments. Personally, I use the absolute path because of option 3, just to keep everything simple for myself.
    The second option is to call the package through a SQL Agent job. This does require you to configure your project to reference an environment and use its variables. When a job step is set up, the configuration part will require you to select that reference again. This is more useful when you want to automate the same package to run in different environments.
    The third option is the most important to me, as I have an SSIS framework that calls hundreds of packages. The main part of the stored procedure is in this post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx), but the top part had to be modified to include the logic to use an environment reference:

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
              @PackageName            NVARCHAR(255)
            , @ProjectFolder          NVARCHAR(255)
            , @ProjectName            NVARCHAR(255)
            , @AuditKey               INT
            , @DisableNotification    BIT
            , @PackageExecutionLogID  INT
            , @EnvironmentName        NVARCHAR(128) = NULL
            , @Use32BitRunTime        BIT = 0
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;

            -- Create a package execution
            IF @EnvironmentName IS NULL
            BEGIN
                EXEC [SSISDB].[catalog].[create_execution]
                      @package_name = @PackageName
                    , @execution_id = @execution_id OUTPUT
                    , @folder_name = @ProjectFolder
                    , @project_name = @ProjectName
                    , @use32bitruntime = @Use32BitRunTime;
            END
            ELSE
            BEGIN
                DECLARE @EnvironmentID AS INT;

                SELECT @EnvironmentID = [reference_id]
                FROM SSISDB.[internal].[environment_references] WITH(NOLOCK)
                WHERE [environment_name] = @EnvironmentName
                  AND [environment_folder_name] = @ProjectFolder;

                EXEC [SSISDB].[catalog].[create_execution]
                      @package_name = @PackageName
                    , @execution_id = @execution_id OUTPUT
                    , @folder_name = @ProjectFolder
                    , @project_name = @ProjectName
                    , @reference_id = @EnvironmentID
                    , @use32bitruntime = @Use32BitRunTime;
            END
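
    For reference, a call to the modified procedure, with illustrative parameter values, would look something like this:

        EXEC [AUDIT].[LaunchPackageExecutionInSSISCatalog]
              @PackageName           = N'LoadCustomers.dtsx'  -- illustrative package/project names
            , @ProjectFolder         = N'ETL'
            , @ProjectName           = N'Warehouse'
            , @AuditKey              = 1
            , @DisableNotification   = 0
            , @PackageExecutionLogID = 1
            , @EnvironmentName       = N'Development'  -- or NULL to skip the environment reference
            , @Use32BitRunTime       = 0;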

    Read the article

  • Getting \r\n newlines to display properly in Mantis database

    - by matt_tm
    I've exported a Mantis project from one server to another, and despite the MySQL SQL file (from which it was populated) showing:

        (15375,'\r\n1. Log out\r\n\r\n2. When logging in, start ...

    the final end-user view loses the \r\n and shows everything on one line:
    1. Log out 2. When logging in, start typing
    When viewing through phpMyAdmin, I can see the record properly:
    1. Log out
    2. When logging in, start typing
    How can I correct this behavior when displaying this data?
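
    One way to narrow the problem down is to confirm whether the CR+LF pairs actually survived the import. A sketch, assuming the default Mantis schema where issue text lives in mantis_bug_text_table (an assumption worth verifying against your install):

        -- list records whose stored text still contains literal CR+LF pairs
        SELECT id
        FROM mantis_bug_text_table
        WHERE INSTR(description, CHAR(13, 10)) > 0;

    If rows come back, the data survived the import and the issue is purely display-side, since raw newlines collapse to spaces in HTML output.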

    Read the article

  • One EC2 source with distributed varnish machines

    - by Elad Lachmi
    I have a web site hosted on an EC2 instance (2008 R2 + IIS 7.5 + SQL Server). I put up one Linux box running RHEL with Varnish. After some configuration trial and error, I found a configuration that works. Now I want to duplicate the Varnish boxes to other availability zones, but continue to pull the pages from the original Windows box. It is my understanding that I can put the Varnish boxes in different zones and pull from the Windows box via its external IP. But what do I need to do in order for each user to receive content from the box physically closest to them? Is this even possible? Thank you!

    Read the article

  • Announcing Oracle Receivables Generic Data Fix (GDF) for Refunds

    - by user793553
    Here's the first of what will be a series of Generic Data Fixes (GDF) to be released by Receivables Development. Generic Data Fixes are created by development to fix data issues caused by bugs/issues in the application code. GDF benefits/features include:
    - Developed for bugs that can cause data issues.
    - Provides a SELECT script that uses an identification/signature query to identify and report all data affected by the issue/condition caused by a bug.
    - Allows customers to view and modify what will be fixed.
    - Provides a separate FIX script to fix the data reported by the SELECT script. The FIX script creates backup tables for the data that is fixed/updated.
    - Available on My Oracle Support for download.
    In Release 12, you could create a refund by either of the following methods:
    - Applying a receipt to the Refund activity, which creates an Invoice in Payables
    - Going directly into Payables to create a refund for an open Credit Memo in Receivables
    When the Invoice in Payables that is associated with the refund is cancelled, the corresponding refund application or credit memo in Receivables is not properly reinstated. The receipt application remains applied to the Refund, whereas it should be automatically unapplied; the credit memo stays closed instead of being reopened. Doc ID 761993.1 includes the patch to make sure this doesn't happen in the future, as well as a GDF script to fix the current data (script name: ar_std_refund_unapp.sql). Download the script and run it in READ_ONLY_MODE to identify 'refund' applications with this problem. Stay tuned for more GDF scripts coming soon...
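
    The general shape of such a SELECT/FIX script pair, with purely hypothetical table and column names (the real identification query ships with the Doc ID above), is roughly:

        -- SELECT (identification) phase: report the rows matching the signature
        SELECT application_id, status
        FROM my_receivable_applications
        WHERE <signature conditions for the bug>;

        -- FIX phase: back up the affected rows, then repair them
        CREATE TABLE my_receivable_applications_bak AS
            SELECT * FROM my_receivable_applications
            WHERE <signature conditions for the bug>;

        UPDATE my_receivable_applications
        SET status = 'UNAPP'
        WHERE <signature conditions for the bug>;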

    Read the article

  • Firefox completes the address bar with content absent from my history

    - by Antoine
    I have set Firefox to complete the address bar with elements from the history only (the other options are: nothing, bookmarks, and history+bookmarks). However, Firefox still completes the address bar with elements that are no longer in my history. A search in the history returns 0 results for the incriminated string. How can I solve this without losing my entire history? I have already tried shift+delete on the elements I would like to delete, without success. How can I find the source of a given completion (like an SQL request against the sqlite3 files used to store history)? I'm using Firefox 16.0.2 on OS X 10.8.2.
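
    On the SQL angle: Firefox keeps history and bookmarks in places.sqlite inside the profile folder, so a lookup like the following, run with sqlite3 against a copy of that file (Firefox locks the live one), will show whether the string is really gone:

        -- search stored places for the completion you are chasing
        SELECT url, title, frecency
        FROM moz_places
        WHERE url LIKE '%incriminated-string%';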

    Read the article

  • Book Review: Professional ASP.NET MVC 4

    - by Sam Abraham
    The past few weeks have been particularly busy, as I continue to dedicate a bigger portion of my free time to refreshing my memory and enhancing my knowledge of best practices pertaining to technologies we plan on using for a major upcoming project. In this blog post, I will be providing a brief overview of my latest reading, "Professional ASP.NET MVC 4" by Jon Galloway, Phil Haack, Brad Wilson and K. Scott Allen. This book is a must-read for web developers looking to enhance their MVC expertise with best practices and tips shared by recognized industry experts. The book takes the reader on a 16-chapter journey towards being a better ASP.NET MVC developer, with chapter 16 putting all the information covered into practical context by dissecting the implementation of NuGet.org, a real-life, open-source ASP.NET MVC project. All code samples referenced in this book are conveniently accessible via NuGet, a free, open-source library package manager that installs as a Visual Studio extension. Chapters 2, 3 and 4 thoroughly cover MVC's various components: Controllers "C", Views "V" and Models "M" respectively. Chapter 5 covers the extension methods (helpers) provided to speed up and ease the use of common HTML elements such as forms, textboxes and grids, to name a few. Chapter 6 tackles built-in validation while providing examples and use cases for implementing custom validation that plugs into the MVC framework. Chapters 7 thru 13 discuss the latest on Membership, Ajax, Routing, NuGet and the ASP.NET Web API. Chapters 12 (Dependency Injection) and 13 (Unit Testing) demonstrate a big competitive advantage of MVC: its ease of testability and pluggability. Chapters 14 and 15 target the advanced developer, showcasing how to extend MVC to customize and replace every piece of the framework. In conclusion, I strongly recommend Professional ASP.NET MVC 4 as an excellent read both for developers already using MVC and for those getting started with the framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers' Group. You can access my reviews of books I recently read:
    - Professional ASP.NET Design Patterns
    - Professional WCF 4.0
    - Inside Windows Communication Foundation
    - Inside Microsoft SQL Server 2008 series

    Read the article

  • Would You Swim Laps in Lake Baikal?

    - by rickramsey
    This is the lake where Yuli Vasiliev's countrymen swim laps. Yuli is one of my favorite OTN writers not just because he really knows his stuff. Not just because his writing is clear and accurate. And not just because his English is better than the English of most native speakers. Yo, those are all good reasons. But it's the Lake Baikal thing. Yuli recently wrote two wicked good how-to's about Oracle VM Templates. You should read them. You might gain a gram of Yuli's respect. Two grams, if you can head-butt icebergs while you swim.
    How to Use Oracle VM Templates
    How to prepare an Oracle VM environment to use Oracle VM Templates, how to obtain a template, and how to deploy the template to your Oracle VM environment. Also how to create a virtual machine based on that template, and how you can clone the template and change the clone's configuration.
    How to Use Oracle VM VirtualBox Templates
    How to use Oracle VM VirtualBox Templates in Oracle VM VirtualBox. Similar to the article above, but it describes how to download, install, and configure the templates within Oracle VM VirtualBox, instead of on bare metal.
    Other OTN Technical Articles by Yuli Vasiliev:
    - Retrieving, Transforming, and Consolidating Web Data with Oracle Database
    - Setting Up, Configuring, and Using a WebLogic Server Cluster
    - Cube Development for Beginners
    - How to XQuery Non-JDBC Sources from JDBC
    - Advanced Dimensional Design with Oracle Warehouse Builder
    - Using the JDBC Connectivity Layer in Oracle Warehouse Builder
    - High Performance Oracle JDBC Programming
    - Python Data Persistence with Oracle
    - Querying JPA Entities with JPQL and Native SQL
    - Rick

    Read the article

  • How to set up Amazon EC2 with my own OS and DB?

    - by SLim
    I've got my own versions of an OS and DB, namely Windows Server 2008 R2 Hyper-V and SQL Server 2008 R2, both in the Enterprise edition. May I know how to configure them and get them up and running on Amazon EC2? What else is a must-have to make them run? Also, how could I install the operating system and DNS? I have never run a server before, but I just need something like a VPS to support my development and testing. Amazon EC2 seems the best and cheapest service, at only $1 per hour.

    Read the article

  • Non-blocking ORM issues

    - by Nikolay Fominyh
    Once I had a question on SO, and found that there are no non-blocking ORMs for my favorite framework. I mean an ORM with callback support for asynchronous retrieval: the ORM would be supplied with a callback or some such to "activate" when data has been received. Otherwise the ORM needs to be split off into a separate thread to guarantee UI responsiveness. I want to create one, but I have some questions that are blocking me from starting development: What issues can we meet when developing an ORM? Does the word "non-blocking" before the word "ORM" dramatically increase the complexity of an ORM? Why aren't there many non-blocking ORMs around?
    Update: It looks like I have to improve my question. We have solutions that already allow us to receive data in a non-blocking way, and I believe that not all companies using such solutions are writing raw SQL. We want to create a more generic solution that we can reuse in future projects. What difficulties might we meet?

    Read the article

  • Where to put business logic in MVC design?

    - by BriskLabs Pakistan
    I have created a simple MVC Java application that adds records to a database through data forms. My app collects data, validates it and stores it; this is because the data is sourced online from different users. The data is mostly numeric in nature. Now, on the numeric data stored in the database (SQL Server), I wish my app to be able to perform computations... and display the results. The user is not interested in how the computations are done, so they must be encapsulated. The user must only be able to view the simple computed data, which is, for example, A column data - B column data / C column data, etc. I know how to write stored procedures for this, but I want a 3-tier app. I want the data that I put into the database as a record to be worked upon by performing calculations on it. However, the original data should remain unaffected, while the new data, post-calculations, must be stored as a new entity record in the database. Where should I write the code for this background calculation? As it is the rules and business logic... in new Java beans files?
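
    Wherever the calculation layer ends up living, the storage pattern described, deriving a new record while leaving the original untouched, is a plain insert-select. A sketch with hypothetical table and column names:

        -- derive a computed record from the raw one without modifying it
        INSERT INTO computed_record (source_id, computed_value)
        SELECT id, col_a - col_b / NULLIF(col_c, 0)  -- NULLIF guards against divide-by-zero
        FROM raw_record
        WHERE id = @new_record_id;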

    Read the article

  • How to make schema dumps comparable between Windows and Linux

    - by Jonathan
    I have two systems running, one on Linux and the other on Windows. From the Linux box, I ran pg_dump against both systems and dumped the schema. The pg_dump command:

        pg_dump -h HOST -U USER -s -f /tmp/out.sql DB_NAME

    After I removed all of the "--" comments, I diffed the files together. A diff output snippet, Linux compared to Windows:

        - ADD CONSTRAINT sys_c004775 FOREIGN KEY (ruleid) REFERENCES rule(ruleid);
        + ADD CONSTRAINT sys_c004775 FOREIGN KEY (ruleid) REFERENCES "rule"(ruleid);

    The Linux dump does not quote any entities and the Windows one does. Is this a function of some encoding, or just a difference between Windows and Linux? Is there an option in pg_dump to make the output more consistent?

    Read the article

  • Choosing open source vs. proprietary CMS

    - by jkneip
    Hi, I've been tasked with redesigning a website for a small academic library. While I've only been in charge of the site for 6 months, we've been maintaining static HTML pages edited in Dreamweaver for years. The last count of our total pages is around 400. Our university is going with an enterprise-level solution called Sitefinity, although we maintain our own domain and are responsible for maintaining our own presence. Some background: my library has a couple of Microsoft IIS servers on which this static HTML site has been running. I'm advocating for the implementation of a CMS as part of this redesign. The problem is that I'm basically the lone webmaster, so I have no one to agree or disagree with my choice. There are also only 1-2 content editors for the site right now, but a CMS could change that factor. I would like to use the functionality of having servers that run .NET and MS SQL, but I am more experienced setting up and maintaining open-source software like WordPress or Drupal on web hosts. My main concern is choosing a CMS that will be easy to update / maintain / deal with upgrades (i.e., support) in case I'm not there in the future. So I'm wondering how to factor in the open-source CMS vs. relatively inexpensive commercial CMS decision, and whether choosing a PHP/MySQL vs. ASP.NET development environment will play into my decision. Thanks for any input that can be offered based on the details I've given. Thanks, Jason

    Read the article
