Search Results

Search found 63752 results on 2551 pages for 'create table'.


  • How to connect two query results?

    - by nijansen
    I want to retrieve all ids within a certain timespan. The timestamps, however, are stored in a different table: Table A has column my_id; Table B has columns my_id, timestamp. I would want something like SELECT id, time FROM (SELECT my_id AS id FROM A) q1, (SELECT timestamp AS time FROM B WHERE my_id = id) q2; But how can I get the value of id within a different subquery? Is there an elegant solution for this problem?
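    One way to sidestep the correlated-subquery problem entirely is a plain join on my_id. A sketch; the :from_time and :to_time placeholders and the timespan filter are assumptions, since the question does not show how the timespan is applied:

        SELECT a.my_id     AS id,
               b.timestamp AS time
        FROM   A a
        JOIN   B b ON b.my_id = a.my_id
        WHERE  b.timestamp BETWEEN :from_time AND :to_time;

    Depending on the database, the column name timestamp may need quoting because it is a reserved word.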

    Read the article

  • Issue in alternate Row color using each() method of JQuery

    - by user1323981
    I have a table as follows: <table > <tr> <th scope="col">EmpId</th><th scope="col">EmpName</th> </tr> <tr> <td>1</td><td>ABC</td> </tr> <tr> <td>2</td><td>DEF</td> </tr> </table> I want to set the alternate row color only on the "td" rows of the table and not the "th" row, using only the each() function. I have tried <style type="text/css"> tr.even { background-color: green; } tr.odd { background-color: yellow; } </style> $(document).ready(function () { $('table > tbody').each(function () { $('tr:odd', this).addClass('odd').removeClass('even'); $('tr:even', this).addClass('even').removeClass('odd'); }); }); This works, but it also picks up the "th" row. How can I avoid that? Please help. Thanks

    Read the article

  • SSIS Data Transformation

    - by bbbbb
    Hi, I am new to SSIS, so please bear with me. I am trying to transfer data from one database to a new one. I am fetching data from one table, say the name of a person, and then inserting it into, say, a Person table. This generates a PersonID which I then want to insert into, say, an Address table. What should the approach be using SSIS? Any suggestions?
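    One common pattern is to capture the generated identity value on the SQL side (for example in an Execute SQL Task) and reuse it for the child insert. A minimal T-SQL sketch, assuming Person.PersonID is an IDENTITY column and that the column names shown are placeholders, not the real schema:

        DECLARE @PersonID INT;

        INSERT INTO Person (Name)
        VALUES (@Name);

        SET @PersonID = SCOPE_IDENTITY();   -- identity generated by the insert above

        INSERT INTO Address (PersonID, Street, City)
        VALUES (@PersonID, @Street, @City);

    Inside a Data Flow, the same idea is usually implemented by loading Person first and then using a Lookup against it (on a natural key such as the name) to fetch PersonID for the Address rows.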

    Read the article

  • Can't seem to get my like/dislike to work in PHP

    - by user300371
    My table is comment_likedislike. It has the fields comment_counterid, comment_counter and comment_id (which comes from another table). I have a URL (LIKE) that, when clicked, links to this code and passes the comment_id and like_id. I want to keep a count: if it is the first 'like', a new comment_counter row is stored in the comment_likedislike table; but if there is already a 'like' for the comment in the table, the comment_counter is just incremented by 1. Problem: When I run this code, it never UPDATEs (1st if statement) but always INSERTs (2nd if statement), whether there is a like for the comment or not. I don't think the code is actually checking whether the comment_id is already in the table. I am a novice PHP programmer. Thanks! if (isset($_GET['comment_id']) && isset($_GET['like_id'])) { $query5="SELECT * FROM comment_likedislike "; $data5=mysqli_query ($dbc, $query5); while ($row5= mysqli_fetch_array($data5)){ $comment_id2=$row5['comment_id']; } if ($comment_id2 == $_GET['comment_id']){ $counter=$row5['comment_counter']; $counter++; $query= "UPDATE comment_likedislike SET comment_counter ='$counter' WHERE comment_id= '".$_GET['comment_id']."' "; mysqli_query($dbc, $query); } if ($comment_id2 != $_GET['comment_id']) { $counter2=1; $query9 = "INSERT INTO comment_likedislike (comment_counter, comment_id) VALUES ('$counter2', '".$_GET['comment_id']."' )"; mysqli_query($dbc, $query9); } }
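    A sketch of one way to avoid the read-then-branch logic entirely: put a unique key on comment_id and let MySQL do the upsert. The index name uq_comment is hypothetical, and the comment id should be bound as a parameter rather than interpolated from $_GET:

        ALTER TABLE comment_likedislike
          ADD UNIQUE KEY uq_comment (comment_id);

        -- one statement handles both the first like and every later one
        INSERT INTO comment_likedislike (comment_id, comment_counter)
        VALUES (?, 1)
        ON DUPLICATE KEY UPDATE comment_counter = comment_counter + 1;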

    Read the article

  • SQL to get rows (not groups) that match an aggregate

    - by xulochavez
    Given table USER (name, city, age), what's the best way to get the user details of the oldest user per city? I have seen the following example SQL used in Oracle, which I think works select name, city, age from USER, (select city as maxCity, max(age) as maxAge from USER group by city) where city=maxCity and age=maxAge So in essence: use a nested query to select the grouping key and aggregate for it, then use it as another table in the main query and join with the grouping key and the aggregate value for each key. Is this the standard SQL way of doing it? Is it any quicker than using a temporary table, or is it in fact using a temporary table internally anyway?
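    The join-to-a-grouped-subquery pattern shown above is the classic portable answer. On databases with analytic functions (Oracle included), a window-function sketch is another common option; the table is written as users here only because USER is a reserved word, and RANK keeps ties just like the max-join does:

        SELECT name, city, age
        FROM (
            SELECT name, city, age,
                   RANK() OVER (PARTITION BY city ORDER BY age DESC) AS rnk
            FROM   users
        ) t
        WHERE rnk = 1;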

    Read the article

  • Best way to update record X when Y is inserted

    - by Saif Bechan
    I have a huge table that is mainly used for backup and administrative purposes. The only record that matters is the last inserted one. Ordering by insert time on every hit is just too slow, so I want to keep a separate table with the last inserted id. In PHP I currently insert, get the last inserted id, and update the other table. Is there a more efficient way to do this?
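    A sketch of one alternative is to let the database maintain the pointer itself with an AFTER INSERT trigger, so the application only does the insert. The table and column names below are placeholders, not the asker's schema:

        CREATE TABLE last_inserted (
          slot      INT PRIMARY KEY,      -- always 1: a single-row table
          record_id BIGINT NOT NULL
        );

        CREATE TRIGGER trg_archive_ai
        AFTER INSERT ON archive_table
        FOR EACH ROW
          INSERT INTO last_inserted (slot, record_id)
          VALUES (1, NEW.id)
          ON DUPLICATE KEY UPDATE record_id = NEW.id;

    Another option that avoids the extra table completely is an index on the insert-time (or auto-increment id) column, which makes ORDER BY ... DESC LIMIT 1 cheap.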

    Read the article

  • How should I solve this MySql problem (PHP) ? (Beginner)

    - by Camran
    I have several tables in a MySQL database. I have a classifieds website, and at the bottom I display the user's last visited classifieds. I do this by storing the IDs of the ads in an array in a cookie. Now, my db is made up roughly like this: Main Table: // Stores global information, i.e. these fields have to be filled out in every record, never left blank: ID, Price, Category, Seller. Item Table: // Stores descriptive info about what's for sale: ID, AD_ID (FK) // same as ID in the MAIN TABLE, Color, Size, Mileage, etc. My problem is that I need to know what category the ad is in, in order to query MySQL for the right information, I think. So I need two variables, but the cookie only has one (ID) stored. Of course I could make two queries: the first one just matches the ID against the main_table and fetches the category from the Main_table, then the second query fetches all other info from the right table. Here is an example if the category was Vehicles: SELECT * FROM main_table, vehicles_table WHERE main_table.id=$id_from_cookie AND main_table.ad_id=vehicles_table.ad_id As you can see above, I need the category to know which table to check, right? But I think there must be a smarter way, like fetching everything in one single query using only one variable (the id from the cookie). How should I do this? Let me know if you need more input... Thanks
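    One hedged sketch of a single-query approach is to LEFT JOIN every category table on the ad id; only the table for the ad's real category contributes non-NULL columns. vehicles_table comes from the question, boats_table is purely a made-up second category for illustration:

        SELECT m.*, v.*, b.*
        FROM   main_table m
        LEFT JOIN vehicles_table v ON v.ad_id = m.id
        LEFT JOIN boats_table    b ON b.ad_id = m.id
        WHERE  m.id = ?;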

    Read the article

  • Modifying SQL XML Column

    - by Chinjoo
    I have an XML column in one of my tables. For example, I have an Employee table with the following fields: Name (varchar) | Address (XML) The Address field has values like <Address> <Street></Street> <City></City> </Address> There are already n rows in the table. Now I want to insert a new node - Country - into all the rows in the table, with the default <Country>IND</Country>. How can I write the query for this? I want all the existing data to stay as it is, with only the country node added to every Address column's XML.
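    A sketch using SQL Server's XML DML, assuming the table really is called Employee and every row has a single top-level <Address> element:

        UPDATE Employee
        SET    Address.modify('insert <Country>IND</Country> as last into (/Address)[1]')
        WHERE  Address IS NOT NULL;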

    Read the article

  • SQL Alter: add multiple FKs?

    - by acidzombie24
    From here: ALTER TABLE ORDERS ADD FOREIGN KEY (customer_sid) REFERENCES CUSTOMER(SID); How do I add several keys with SQL Server? Is it something like the below? (I can't test at the moment and unfortunately I have no way to test queries unless I run them through code.) ALTER TABLE ORDERS ADD FOREIGN KEY (customer_sid) REFERENCES CUSTOMER(SID), ADD FOREIGN KEY (customer_sid2) REFERENCES CUSTOMER(SID2); or is it like ALTER TABLE ORDERS ADD FOREIGN KEY (customer_sid, customer_sid2) REFERENCES CUSTOMER(SID, SID2)
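    For reference, SQL Server's ALTER TABLE takes a single ADD followed by a comma-separated list of elements, and naming the constraints makes them easier to drop later. A sketch; the constraint names are hypothetical, and CUSTOMER(SID2) must be unique or a primary key before it can be referenced:

        ALTER TABLE ORDERS
          ADD CONSTRAINT FK_ORDERS_CUSTOMER_SID
                FOREIGN KEY (customer_sid)  REFERENCES CUSTOMER (SID),
              CONSTRAINT FK_ORDERS_CUSTOMER_SID2
                FOREIGN KEY (customer_sid2) REFERENCES CUSTOMER (SID2);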

    Read the article

  • Write a JavaScript that accepts a number from the user using the “prompt” function

    - by A sw A
    Write a JavaScript that accepts a number from the user using the “prompt” function. Then it draws a table in the HTML document that has the user-specified number of rows and columns. In each table cell, it displays the result of the math operation row raised to the power of column. For example, if the user enters the number 3, the JavaScript should draw the following table:
    1 1 1
    2 4 8
    3 9 27

    Read the article

  • What is the best practice for relational database tables in mysql?

    - by George
    Hi, I know there is a lot of info on MySQL out there, but I was not really able to find an answer to this specific and actually simple question. Let's say I have two tables: USERS (with many fields, e.g. name, street, email, etc.) and GROUPS (also with many fields). The relation is (I guess?) 1:n, that is, ONE user can be a member of MANY groups. What I did is create another table, named USERS_GROUPS_REL. This table has only two fields: us_id (unique key of table USERS) and gr_id (unique key of table GROUPS). In PHP I do a query with a join. Is this "best practice" or is there a better way? Thankful for any hint!
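    For what it's worth, that junction-table layout is the usual way to model a many-to-many relation. A minimal sketch of the table, assuming users.id and groups.id are the primary keys (both assumptions, since the question does not show the parent tables); the composite primary key prevents duplicate memberships:

        CREATE TABLE users_groups_rel (
          us_id INT NOT NULL,
          gr_id INT NOT NULL,
          PRIMARY KEY (us_id, gr_id),
          FOREIGN KEY (us_id) REFERENCES users (id),
          FOREIGN KEY (gr_id) REFERENCES `groups` (id)   -- backticks: GROUPS is reserved in newer MySQL versions
        ) ENGINE=InnoDB;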

    Read the article

  • SQL query for selecting most recent entries

    - by Mr_Skid_Marks
    A table in my database has a column, DATE_ADDED (stored in seconds). I want to extract all rows with the most recent date (aka largest value for DATE_ADDED). The only solution I have come up with is to SELECT all the rows in ASC (ascending) order, grab the last entry from the table, check the date on this, and perform another SELECT on the table but this time only for the discovered DATE_ADDED. Is it possible to simplify this series of queries into a single one? My thought is I should be able to do a SELECT on all of the largest values in the table, but I am struggling to come up with a proper query.
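    A sketch of a single query that does this, using a scalar subquery for the maximum (my_table stands in for the real table name, which the question does not give):

        SELECT *
        FROM   my_table
        WHERE  DATE_ADDED = (SELECT MAX(DATE_ADDED) FROM my_table);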

    Read the article

  • How to retrieve from two tables with same foreign key repeated more than once?

    - by Sarenya
    How do I display the data of tables that are linked by a primary key and foreign key, where the foreign key value repeats? For example, I have two tables, ParentTable and ChildTable. The primary key of the Parent table acts as the foreign key of the Child table. There is more than one record with the same ParentId in the Child table. How do I retrieve them and display them in a single Grid or List or any type of view?
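    On the SQL side this is a plain join that simply yields one row per child (ParentTable, ChildTable and ParentId come from the question; any other column names would be assumptions):

        SELECT p.ParentId, c.*
        FROM   ParentTable p
        JOIN   ChildTable  c ON c.ParentId = p.ParentId
        ORDER  BY p.ParentId;

    The repeated ParentId rows can then be grouped in the UI layer, for example by binding the result to a grouped grid or list.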

    Read the article

  • weird index behavior

    - by TasostheGreat
    I have set up my table with an index only on done_status (done_status is an INT). When I use EXPLAIN SELECT * FROM reminder WHERE done_status=2 I get this back: id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE reminder ALL done_status NULL NULL NULL 5 Using where But when I run EXPLAIN SELECT * FROM reminder WHERE done_status=1 this is what I get back: id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE reminder ref done_status done_status 4 const 2 The first time it shows it examines 5 rows, the second time 2 rows. I don't think the index works; if I understood it right, the first time it should give me 3 rows. What am I doing wrong? SHOW INDEX FROM reminder: Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment reminder 1 done_status 1 done_status A 5 NULL NULL BTREE

    Read the article

  • GoldenGate Replicat HANDLECOLLISIONS

    - by Liu Maclean
    HANDLECOLLISIONS?????goldengate????????REPLICAT??,???????????????????,???????????????????????????,??????????????????????????reperror????????discard??,????????????????,??????(????error mapping????,???????discard??),??????????????;?????????????????,????????? ??HANDLECOLLISIONS?????: target??delete??(missing delete),??????????discardfile target??update??(missing update) ????????=» update???INSERT ,???????????? ?????????=» ??????????discardfile ????????????target??,???replicat???UPDATE?????????????? ??1 target??delete??(missing delete) : C:\Users\ML>sqlplus / as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 18 13:38:03 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> conn sender/oracle Connected. SQL> create table handlec(t1 int primary key,t2 int); Table created. SQL> insert into handlec values(1,2); 1 row created. SQL> insert into handlec values(3,2); 1 row created. SQL> insert into handlec values(4,2); 1 row created. SQL> commit; Commit complete. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 3 2 4 2 target : SQL> conn receiver/oracle Connected. SQL> create table handlec(t1 int primary key,t2 int); Table created. SQL> insert into handlec values(1,2); 1 row created. SQL> commit; SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 SQL> GGSCI (XIANGBLI-CN) 1> alter extract load2 , begin now EXTRACT altered. GGSCI (XIANGBLI-CN) 4> alter replicat rep2, begin now REPLICAT altered. GGSCI (XIANGBLI-CN) 13> add trandata sender.* Logging of supplemental redo data enabled for table SENDER.HANDLEC. Logging of supplemental redo log data is already enabled for table SENDER.TV. GGSCI (XIANGBLI-CN) 14> start mgr MGR is already running. GGSCI (XIANGBLI-CN) 15> start er * Sending START request to MANAGER ... EXTRACT LOAD2 starting Sending START request to MANAGER ... REPLICAT REP2 starting GGSCI (XIANGBLI-CN) 16> info all Program Status Group Lag at Chkpt Time Since Chkpt MANAGER RUNNING EXTRACT RUNNING LOAD2 00:00:00 00:00:01 REPLICAT RUNNING REP2 00:00:00 00:00:08 ***SOURCE?????TARGET????? SQL> delete handlec where t1=3; 1 row deleted. SQL> commit; Commit complete. ??SQL error 1403??,REPLICAT ABORT 2012-09-18 13:45:48 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL ). 2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3. 2012-09-18 13:45:48 WARNING OGG-01154 SQL error 1403 mapping SENDER.HANDLEC to RECEIVER.HANDLEC OCI Error ORA-01403: no data found, SQL . 2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3. 
Source Context : SourceModule : [er.errors] SourceID : [er/errors.cpp] SourceFunction : [take_rep_err_action] SourceLine : [623] ThreadBacktrace : [8] elements : [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]] : [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]] : [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]] : [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]] : [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]] : [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]] 2012-09-18 13:45:48 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC. *********************************************************************** * ** Run Time Statistics ** * *********************************************************************** Last record for the last committed transaction is the following: ___________________________________________________________________ Trail name : D:\ogg\V34342-01\ex\ze000003 Hdr-Ind : E (x45) Partition : . (x04) UndoFlag : . (x00) BeforeAfter: B (x42) RecLength : 9 (x0009) IO Time : 2012-09-18 13:45:38.000000 IOType : 3 (x03) OrigNode : 255 (xff) TransInd : . (x03) FormatType : R (x52) SyskeyLen : 0 (x00) Incomplete : . (x00) AuditRBA : 44 AuditPos : 3337232 Continued : N (x00) RecCount : 1 (x01) 2012-09-18 13:45:38.000000 Delete Len 9 RBA 1091 Name: SENDER.HANDLEC ___________________________________________________________________ Reading D:\ogg\V34342-01\ex\ze000003, current RBA 1091, 0 records Report at 2012-09-18 13:45:48 (activity since 2012-09-18 13:45:48) From Table SENDER.HANDLEC to RECEIVER.HANDLEC: # inserts: 0 # updates: 0 # deletes: 0 # discards: 1 Last log location read: FILE: D:\ogg\V34342-01\ex\ze000003 SEQNO: 3 RBA: 1091 TIMESTAMP: 2012-09-18 13:45:38.000000 EOF: NO READERR: 0 2012-09-18 13:45:48 ERROR OGG-01668 PROCESS ABENDING. 2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE1.TRC closed. 2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE2.TRC closed. CACHE OBJECT MANAGER statistics CACHE MANAGER VM USAGE vm current = 0 vm anon queues = 0 vm anon in use = 0 vm file = 0 vm used max = 0 ==> CACHE BALANCED CACHE CONFIGURATION cache size = 2G cache force paging = 3.41G buffer min = 64K buffer highwater = 8M pageout eligible size = 8M ================================================================================ ??skiptransaction???????? GGSCI (XIANGBLI-CN) 18> start rep2 skiptransaction Sending START request to MANAGER ... REPLICAT REP2 starting ??2 target??update??(missing update),???????? : ???????, ??source????????? SQL> update handlec set t1=5 where t1=4; 1 row updated. SQL> commit; Commit complete. ???target ????(miss update)??????? Database error 1403+OGG-01296 2012-09-18 13:49:30 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "RECEIVER"."HANDLEC" SET "T1" = :a1 WHERE "T1" = :b0>). 2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3. 2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3. 
Source Context : SourceModule : [er.errors] SourceID : [er/errors.cpp] SourceFunction : [take_rep_err_action] SourceLine : [623] ThreadBacktrace : [8] elements : [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]] : [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]] : [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]] : [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]] : [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]] : [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]] 2012-09-18 13:49:30 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC. ??HANDLECOLLISIONS?,rep??????????discard?? GGSCI (XIANGBLI-CN) 23> view params rep2 replicat rep2 userid receiver , password oracle trace ./rep_trace1.trc trace2 ./rep_trace2.trc ASSUMETARGETDEFS HANDLECOLLISIONS map sender.*, target receiver.*; GGSCI (XIANGBLI-CN) 18> start rep2 SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 5 ????T1=5 T2 NULL?????? ,??update?????????????,??replicat??????????????update????????????????,?????T2 ?NULL ,????????????EXTRACT??PKUPDATE??? ????????FETCHOPTIONS FETCHPKUPDATECOLS ????????EXTRACT?????,???EXTRACT? ????extract???????????? ??????: SQL> conn receiver/oracle Connected. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 5 20 200 SQL> delete handlec where t1=5; 1 row deleted. SQL> commit; Commit complete. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 20 200 SQL> conn sender/oracle Connected. SQL> update handlec set t1=t1+1000 where t1=5; 1 row updated. SQL> commit; Commit complete. SQL> conn receiver/oracle Connected. SQL> SQL> SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 20 200 1005 2 ???????FETCHOPTIONS FETCHPKUPDATECOLS??????redo image???trail?,????primary key?????HANDLECOLLISIONS????target??????????? ??3 ????????????target??,???replicat???UPDATE??????????????: *** TARGET SQL> conn receiver/oracle Connected. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 9 5 target????? t1=10 t2=9??? ,????source???(10,100)??? >>SOURCE SQL> insert into handlec values(10,100); 1 row created. SQL> commit; >>TARGET SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 5 ???????source?insert??,???target???????????????HANDLECOLLISIONS?REPLICAT???UPDATE??????COLUMNS ?? HANDLECOLLISIONS?????goldengate????????REPLICAT??,???????????????????,???????????????????????????,??????????????????????????reperror????????discard??,????????????????,??????,??????????????;?????????????????,????????? ??HANDLECOLLISIONS?????: target??delete??(missing delete),??????????discardfile target??update??(missing update) ????????=» update???INSERT ,???????????? ?????????=» ??????????discardfile ????????????target??,???replicat???UPDATE?????????????? ?:???????????Insert/Delete??,????????????????Replicat?????abend,????? ???????????,??target??HANDLECOLLISIONS??update??,?????INSERT??????,???????????????,FETCHOPTIONS FETCHPKUPDATECOLS??????redo image???trail?,????primary key?????HANDLECOLLISIONS????target??????????? 
HANDLECOLLISIONS can also be switched off at runtime with the send command: GGSCI (XIANGBLI-CN) 29> send rep2, NOHANDLECOLLISIONS Sending NOHANDLECOLLISIONS request to REPLICAT REP2 ... REP2 NOHANDLECOLLISIONS set for 1 tables and 0 wildcard entries

    Read the article

  • ASP.NET MVC validation problem

    - by ile
    ArticleRepostitory.cs: using System; using System.Collections.Generic; using System.Linq; using System.Web; using CMS.Model; using System.Web.Mvc; namespace CMS.Models { public class ArticleDisplay { public ArticleDisplay() { } public int CategoryID { set; get; } public string CategoryTitle { set; get; } public int ArticleID { set; get; } public string ArticleTitle { set; get; } public DateTime ArticleDate; public string ArticleContent { set; get; } } public class ArticleRepository { private DB db = new DB(); // // Query Methods public IQueryable<ArticleDisplay> FindAllArticles() { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } public IQueryable<ArticleDisplay> FindTodayArticles() { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID where article.Date == DateTime.Today select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } public Article GetArticle(int id) { return db.Articles.SingleOrDefault(d => d.ArticleID == id); } public IQueryable<ArticleDisplay> DetailsArticle(int id) { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID where id == article.ArticleID select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } // // Insert/Delete Methods public void Add(Article article) { db.Articles.InsertOnSubmit(article); } public void Delete(Article article) { db.Articles.DeleteOnSubmit(article); } // // Persistence public void Save() { db.SubmitChanges(); } } } ArticleController.cs: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Mvc.Ajax; using CMS.Models; using CMS.Model; namespace CMS.Controllers { public class ArticleController : Controller { ArticleRepository articleRepository = new ArticleRepository(); ArticleCategoryRepository articleCategoryRepository = new ArticleCategoryRepository(); // // GET: /Article/ public ActionResult Index() { var allArticles = articleRepository.FindAllArticles().ToList(); return View(allArticles); } // // GET: /Article/Details/5 public ActionResult Details(int id) { var article = articleRepository.DetailsArticle(id).Single(); if (article == null) return View("NotFound"); return View(article); } // // GET: /Article/Create public ActionResult Create() { ViewData["categories"] = new SelectList ( articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" ); Article article = new Article() { Date = DateTime.Now, CategoryID = 1 }; return View(article); } // // POST: /Article/Create [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(Article article) { if (ModelState.IsValid) { try { // TODO: Add insert logic here articleRepository.Add(article); articleRepository.Save(); return RedirectToAction("Index"); } catch { return View(article); } } else { return 
View(article); } } // // GET: /Article/Edit/5 public ActionResult Edit(int id) { ViewData["categories"] = new SelectList ( articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" ); var article = articleRepository.GetArticle(id); return View(article); } // // POST: /Article/Edit/5 [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(int id, FormCollection collection) { Article article = articleRepository.GetArticle(id); try { // TODO: Add update logic here UpdateModel(article, collection.ToValueProvider()); articleRepository.Save(); return RedirectToAction("Details", new { id = article.ArticleID }); } catch { return View(article); } } // // HTTP GET: /Article/Delete/1 public ActionResult Delete(int id) { Article article = articleRepository.GetArticle(id); if (article == null) return View("NotFound"); else return View(article); } // // HTTP POST: /Article/Delete/1 [AcceptVerbs(HttpVerbs.Post)] public ActionResult Delete(int id, string confirmButton) { Article article = articleRepository.GetArticle(id); if (article == null) return View("NotFound"); articleRepository.Delete(article); articleRepository.Save(); return View("Deleted"); } } } View/Article/Create.aspx: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<CMS.Model.Article>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Create </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Create</h2> <%= Html.ValidationSummary("Create was unsuccessful. Please correct the errors and try again.") %> <% using (Html.BeginForm()) {%> <fieldset> <legend>Fields</legend> <p> <label for="Title">Title:</label> <%= Html.TextBox("Title") %> <%= Html.ValidationMessage("Title", "*") %> </p> <p> <label for="Content">Content:</label> <%= Html.TextArea("Content", new { id = "Content" })%> <%= Html.ValidationMessage("Content", "*")%> </p> <p> <label for="Date">Date:</label> <%= Html.TextBox("Date") %> <%= Html.ValidationMessage("Date", "*") %> </p> <p> <label for="CategoryID">Category:</label> <%= Html.DropDownList("CategoryId", (IEnumerable<SelectListItem>)ViewData["categories"])%> </p> <p> <input type="submit" value="Create" /> </p> </fieldset> <% } %> <div> <%=Html.ActionLink("Back to List", "Index") %> </div> </asp:Content> If I remove DropDownList from .aspx file then validation (on date only because no other validation exists) works, but of course I can't create new article because one value is missing. If I leave dropdownlist and try to insert wrong date I get following error: System.InvalidOperationException: The ViewData item with the key 'CategoryId' is of type 'System.Int32' but needs to be of type 'IEnumerable'. If I enter correct date than the article is properly inserted. There's one other thing that's confusing me... For example, if I try manually add the categoyID: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(Article article) { if (ModelState.IsValid) { try { // TODO: Add insert logic here // Manually add category value article.CategoryID = 1; articleRepository.Add(article); articleRepository.Save(); return RedirectToAction("Index"); } catch { return View(article); } } else { return View(article); } } ..I also get the above error. There's one other thing I noticed. If I add partial class Article, when returning to articleRepository.cs I get error that 'Article' is an ambiguous reference between 'CMS.Models.Article' and 'CMS.Model.Article' Any thoughts on this one?

    Read the article

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index and have halved the peak CPU usage and Apache memory usage but the MySQL memory stays constantly at 2.2GB (~51% of available memory on the server). Here's the graph from Plesk. Running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this and not peaks and troughs with usage of the app? Here's the output of the MySQL Tuning Primer script: -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.0.77-log x86_64 Uptime = 1 days 14 hrs 4 min 21 sec Avg. qps = 22 Total Questions = 3059456 Threads Connected = 13 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1 sec. You have 6 out of 3059477 that take longer than 1 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is NOT enabled. You will not be able to do point in time recovery See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html WORKER THREADS Current thread_cache_size = 0 Current threads_cached = 0 Current threads_per_sec = 2 Historic threads_per_sec = 0 Threads created per/sec are overrunning threads cached You should raise thread_cache_size MAX CONNECTIONS Current max_connections = 100 Current threads_connected = 14 Historic max_used_connections = 20 The number of used connections is 20% of the configured maximum. Your max_connections variable seems to be fine. INNODB STATUS Current InnoDB index space = 6 M Current InnoDB data space = 18 M Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 8 M Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 2.07 G Configured Max Per-thread Buffers : 274 M Configured Max Global Buffers : 2.01 G Configured Max Memory Limit : 2.28 G Physical Memory : 3.84 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 4 M Current key_buffer_size = 7 M Key cache miss rate is 1 : 40 Key buffer free ratio = 81 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is supported but not enabled Perhaps you should set the query_cache_size SORT OPERATIONS Current sort_buffer_size = 2 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 132.00 K You have had 16 queries where a join could not use an index properly You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. If you are unable to optimize your queries you may want to increase your join_buffer_size to accommodate larger joins in one pass. Note! This script will still suggest raising the join_buffer_size when ANY joins not using indexes are found. OPEN FILES LIMIT Current open_files_limit = 1024 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. 
Your open_files_limit value seems to be fine TABLE CACHE Current table_cache value = 64 tables You have a total of 426 tables You have 64 open tables. Current table_cache hit rate is 1% , while 100% of your table cache is in use You should probably increase your table_cache TEMP TABLES Current max_heap_table_size = 16 M Current tmp_table_size = 32 M Of 15134 temp tables, 9% were created on disk Effective in-memory tmp_table_size is limited to max_heap_table_size. Created disk tmp tables ratio seems fine TABLE SCANS Current read_buffer_size = 128 K Current table scan ratio = 2915 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 142213 Your table locking seems to be fine The app is a facebook game with about 50-100 concurrent users. Thanks, Rob

    Read the article

  • Adding Extra Hard Drives Debian Fdisk

    - by Belgin Fish
    well I just got a new server and it's a little different than what I'm use to, when I run cfdisk I get WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sda: 3000.6 GB, 3000592982016 bytes 255 heads, 63 sectors/track, 364801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sda1 1 267350 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes 255 heads, 63 sectors/track, 364801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdb1 1 267350 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes 255 heads, 63 sectors/track, 364801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 1 267350 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes 255 heads, 63 sectors/track, 364801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdd1 1 267350 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. WARNING: GPT (GUID Partition Table) detected on '/dev/sdf'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdf: 3000.6 GB, 3000592982016 bytes 255 heads, 63 sectors/track, 364801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdf1 1 267350 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sde: 3000.6 GB, 3000592982016 bytes 255 heads, 63 sectors/track, 364801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sde1 1 267350 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. Usually it tells me which ones arn't partitioned and stuff, and I only have 6 drives in my server and there's 6 showing up here so I'm only assuming the first ones already mounted and formatted correctly? I'm not really sure if anyone would help me out here. Basically I just want to format and mount these drives :)

    Read the article

  • SharePoint.DesignFactory.ContentFiles–building WCM sites

    - by svdoever
    One of the use cases where we use the SharePoint.DesignFactory.ContentFiles tooling is in building SharePoint Publishing (WCM) solutions for SharePoint 2007, SharePoint 2010 and Office365. Publishing solutions are often solutions that have one instance, the publishing site (possibly with subsites), that in most cases need to go through DTAP. If you dissect a publishing site, in most case you have the following findings: The publishing site spans a site collection The branding of the site is specified in the root site, because: Master pages live in the root site (/_catalogs/masterpage) Page layouts live in the root site (/_catalogs/masterpage) The style library lives in the root site ( /Style Library) and contains images, css, javascript, xslt transformations for your CQWP’s, … Preconfigured web parts live in the root site (/_catalogs/wp) The root site and subsites contains a document library called Pages (or your language-specific version of it) containing publishing pages using the page layouts and master pages The site collection contains content types, fields and lists When using the SharePoint.DesignFactory.ContentFiles tooling it is very easy to create, test, package and deploy the artifacts that can be uploaded to the SharePoint content database. This can be done in a fast and simple way without the need to create and deploy WSP packages. If we look at the above list of artifacts we can use SharePoint.DesignFactory.ContentFiles for master pages, page layouts, the style library, web part configurations, and initial publishing pages (these are normally made through the SharePoint web UI). Some artifacts like content types, fields and lists in the above list can NOT be handled by SharePoint.DesignFactory.ContentFiles, because they can’t be uploaded to the SharePoint content database. The good thing is that these artifacts are the artifacts that don’t change that much in the development of a SharePoint Publishing solution. There are however multiple ways to create these artifacts: Use paper script: create them manually in each of the environments based on documentation Automate the creation of the artifacts using (PowerShell) script Develop a WSP package to create these artifacts I’m not a big fan of the third option (see my blog post Thoughts on building deployable and updatable SharePoint solutions). It is a lot of work to create content types, fields and list definitions using all kind of XML files, and it is not allowed to modify these artifacts when in use. I know… SharePoint 2010 has some content type upgrade possibilities, but I think it is just too cumbersome. The first option has the problem that content types and fields get ID’s, and that these ID’s must be used by the metadata on for example page layouts. No problem for SharePoint.DesignFactory.ContentFiles, because it supports deploy-time resolving of these ID’s using PowerShell. For example consider the following metadata definition for the page layout contactpage-wcm.aspx.properties.ps1: Metadata page layout # This script must return a hashtable @{ name=value; ... } of field name-value pairs # for the content file that this script applies to. # On deployment to SharePoint, these values are written as fields in the corresponding list item (if any) # Note that fields must exist; they can be updated but not created or deleted. # This script is called right after the file is deployed to SharePoint.   # You can use the script parameters and arbitrary PowerShell code to interact with SharePoint. # e.g. 
to calculate properties and values at deployment time.   param([string]$SourcePath, [string]$RelativeUrl, $Context) @{     "ContentTypeId" = $Context.GetContentTypeID('GeneralPage');     "MasterPageDescription" = "Cloud Aviator Contact pagelayout (wcm - don't use)";     "PublishingHidden" = "1";     "PublishingAssociatedContentType" = $Context.GetAssociatedContentTypeInfo('GeneralPage') } The PowerShell functions GetContentTypeID and GetAssociatedContentTypeInfo can at deploy-time resolve the required information from the server we are deploying to. I personally prefer the second option: automate creation through PowerShell, because there are PowerShell scripts available to export content types and fields. An example project structure for a typical SharePoint WCM site looks like: Note that this project uses DualLayout. So if you build Publishing sites using SharePoint, checkout out the completely free SharePoint.DesignFactory.ContentFiles tooling and start flying!

    Read the article

  • PetaPoco with parameterised stored procedure and Asp.Net MVC

    - by Jalpesh P. Vadgama
    I have been playing with Micro ORMs as this is very interesting things that are happening in developer communities and I already liked the concept of it. It’s tiny easy to use and can do performance tweaks. PetaPoco is also one of them I have written few blog post about this. In this blog post I have explained How we can use the PetaPoco with stored procedure which are having parameters.  I am going to use same Customer table which I have used in my previous posts. For those who have not read my previous post following is the link for that. Get started with ASP.NET MVC and PetaPoco PetaPoco with stored procedures Now our customer table is ready. So let’s Create a simple process which will fetch a single customer via CustomerId. Following is a code for that. CREATE PROCEDURE mysp_GetCustomer @CustomerId as INT AS SELECT * FROM [dbo].Customer where CustomerId=@CustomerId Now  we are ready with our stored procedures. Now lets create code in CustomerDB class to retrieve single customer like following. using System.Collections.Generic; namespace CodeSimplified.Models { public class CustomerDB { public IEnumerable<Customer> GetCustomers() { var databaseContext = new PetaPoco.Database("MyConnectionString"); databaseContext.EnableAutoSelect = false; return databaseContext.Query<Customer>("exec mysp_GetCustomers"); } public Customer GetCustomer(int customerId) { var databaseContext = new PetaPoco.Database("MyConnectionString"); databaseContext.EnableAutoSelect = false; var customer= databaseContext.SingleOrDefault<Customer>("exec mysp_GetCustomer @customerId",new {customerId}); return customer; } } } Here in above code you can see that I have created a new method call GetCustomer which is having customerId as parameter and then I have written to code to use stored procedure which we have created to fetch customer Information. Here I have set EnableAutoSelect=false because I don’t want to create Select statement automatically I want to use my stored procedure for that. Now Our Customer DB class is ready and now lets create a ActionResult Detail in our controller like following using System.Web.Mvc; namespace CodeSimplified.Controllers { public class HomeController : Controller { public ActionResult Index() { ViewBag.Message = "Welcome to ASP.NET MVC!"; return View(); } public ActionResult About() { return View(); } public ActionResult Customer() { var customerDb = new Models.CustomerDB(); return View(customerDb.GetCustomers()); } public ActionResult Details(int id) { var customerDb = new Models.CustomerDB(); return View(customerDb.GetCustomer(id)); } } } Now Let’s create view based on that ActionResult Details method like following. Now everything is ready let’s test it in browser. So lets first goto customer list like following. Now I am clicking on details for first customer and Let’s see how we can use the stored procedure with parameter to fetch the customer details and below is the output. So that’s it. It’s very easy. Hope you liked it. Stay tuned for more..Happy Programming

    Read the article

  • Hello World Pagelet

    - by astemkov
    Introduction The goal of this exercise is to give you a basic feel of how you can use Pagelet Producer to proxy a web page We will proxy a simple static Hello World web page, cut one section out of that page and present it as a pagelet that you can later insert on your own application page or to your portal page such as WebCenter Portal space or WebCenter Interaction community page. Hello World sample app This is the static web page we will work with: Let's assume the following: The Hello World web page is running on server http://appserver.company.com:1234/ The Hello World web page path is: http://appserver.company.com:1234/helloworld/ Initial Pagelet Producer setup Let's assume that the Pagelet Producer server is running on http://pageletserver.company.com:8889/pagelets/ First let's check that Pagelet Producer is up and running. In order to do that we just need to access the following URL: http://pageletserver.company.com:8889/pagelets/ And this is what should be returned: Now you can access Pagelet Producer administration screens using this URL: http://pageletserver.company.com:8889/pagelets/admin This is how the UI looks: Now if you connect to the internet via proxy server, you need to configure proxy in Pagelet Producer settings. In the Navigator pane: Jump To - Settings Click on "Proxy" Enter your proxy server configuration: Creating a resource First thing that you need to do is to create a resource for your web page. This will tell Pagelet Producer that all sub-paths of the web page should be proxied. It also will allow you to setup common rules of how your web page should be proxied and will serve as a container for your pagelets. In the Navigator pane: Jump To - Resources Click on any existing resource (ex. welcome_resource) Click on "Create selected type" toolbar button at the top of the Navigator pane Select "Web" in the "Select Producer Type" dialog box and click "OK" Now after the resource is created let's click on "General" sub-item a specify the following values Name = AppServer Source URL = http://appserver.company.com:1234/ Destination URL = /appserver/ Click on "Save" toolbar button at the top of the Navigator pane After the resource is created our web page becomes accessible by the URL: http://pageletserver.company.com:8889/pagelets/appserver/helloworld/ So in original web page address Source URL is replaced with Pagelet Producer URL (http://pageletserver.company.com:8889/pagelets) + Destination URL Creating a pagelet Now let's create "Hello World" pagelet. Under the resource node activate Pagelets subnode Click on "Create selected type" toolbar button at the top of the Navigator pane Click on "General" sub-node of newly created pagelet and specify the following values Name = Hello_World Library = MyLib Library is used for logical grouping. The portals use the "Library" value to group pagelets in their respective UI's. For example, when adding pagelets to a WebCenter Portal space you would see the individual pagelets listed under the "Library" name. URL Suffix = helloworld/index.html this is where the Hello World page html is served from Click on "Save" toolbar button at the top of the Navigator pane The Library name can be anything you want, it doesn't have to match the resource name at all. It is used as a logical grouping of pagelets, and you can include pagelets from multiple resources into the same library or create a new library for each pagelet. 
After you save the pagelet you can access it here: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/MyLib/Hello_World which is : http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/ + [Library] + [Name] Or to test the injection of a pagelet into iframe you can click on the pagelets "Documentation" sub-node and use "Access Pagelet using REST" URL: This is what we will see: Clipping The pagelet that we just created covers the whole web page, but we want just the "Hello World" segment of it. So let's clip it. Under the Hello_World pagelet node activate Clipper sub-node Click on "Create selected type" toolbar button at the top of the Navigator pane Specify a Name for newly created clipper. For example: "c1" Click on "Content" sub-node of the clipper Click on "Launch Clipper" button New browser window will open By moving a mouse pointer over the web page select the area you want to clip: Click left mouse button - the browser window will disappear and you will see that Clipping Path was automatically generated Now let's save and access the link from the "Documentation" page again Here's our pagelet nicely clipped and ready for being used on your Web Center Space

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities in their weekend – like traveling, reading or just nothing. Every weekend I try to do something creative and different in the database world. The goal is I learn something new and if I enjoy my learning experience I share with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them. I should highlight that today’s applications use multiple databases from SQL for transactions and analytics, NoSQL for documents, In-Memory for caching to Indexing for search.  Provisioning and deploying these databases often require extensive expertise and time.  Often these databases are also not deployed on the same infrastructure and can create unnecessary latency between the application layer and the databases.  Not to mention the different quality of service based on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with traditional database setup when hosted in the cloud: Database provisioning & orchestration Slow speed due to hardware issues Poor Monitoring Tools High network latency Now if you have a great software and expert network engineer, you can continuously work on above problems and overcome them. However, not every organization have the luxury to have top notch experts in the field. Now above issues are related to infrastructure, but there are a few more problems which are related to software/application as well. Here are the top three things which can be problems if you do not have application expert: Replication and Clustering Simple provisioning of the hard drive space Automatic Sharding Well, Morpheus looks like a product build by experts who have faced similar situation in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases from MySQL, MongoDB, ElasticSearch to Reddis as a service.  Thus users can pick and chose any combination of these databases.  All of them can be provisioned in a matter of minutes with a simple and intuitive point and click user interface.  The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions.  In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases. Here are the few steps on how one can get started with Morpheus. Follow along with me.  First go to http://www.gomorpheus.com and register for a new and free account. Step 1: Signup It is very simple to signup for Morpheus. Step 2: Select your database   I use MySQL for my daily routine, so I have selected MySQL. Upon clicking on the big red button to add Instance, it prompted a dialogue of creating a new instance.   Step 3: Create User Now we just have to create a user in our portal which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to User Screen. Over here you will notice once again a big red button to create a new user. I created a user with my first name.   Step 4: Configure your MySQL client I used MySQL workbench and connected to MySQL instance, which I had created with an IP address and user.   That’s it! You are connecting to MySQL instance. Now you can create your objects just like you would create on your local box. 
You will have all the features of the Morpheus when you are working with your database. Dashboard While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature.  Also with Morpheus you use the same process for provisioning and connecting with other databases: MongoDB, ElasticSearch and Reddis. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article
