Search Results

Search found 31931 results on 1278 pages for 'sql statement'.

  • PHP delete script, return to 'viewsubjects.php?classroom_id=NO VALUE'

    - by Derek
    Hi, as the title states, I am deleting a 'subject' from a 'classroom'. I view classrooms, then can click on a classroom to view the subjects for that classroom. So the link where I am viewing subjects looks like: viewsubjects.php?classroom=23 When the user selects the delete button (in a row) to remove a subject from a class, I simply want the user to be redirected back to the list of subjects for the classroom (exactly where they were before!!) So I thought this is simply a case of calling up the classroom ID within my delete script. Here is what I have: EDIT: corrected spelling mistake in code (this was not the problem) $subject_id = $_GET['subject_id']; $classroom_id = $_GET['classroom_id']; $sql = "DELETE FROM subjects WHERE subject_id=".$subject_id; $result = mysql_query($sql, $connection) or die("MySQL Error: ".mysql_error()); header("Location: viewsubjects.php?classroom_id=".$classroom_id); exit(); The subject is being removed from the DB, but when I am redirected back the URI displays with an empty classroom ID, like: viewsubjects.php?classroom_id= Is there a way to carry the classroom ID successfully through the delete script so the user can be redirected back to the page they came from? Thanks for any help!
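
    One likely cause: the delete link itself never carries classroom_id in its query string (note that the subjects page receives ?classroom=23, not ?classroom_id=23), so $_GET['classroom_id'] is empty inside the delete script. A minimal sketch of the idea, with hypothetical file names; casting the IDs to int also guards the raw SQL against injection:

      <?php
      // On viewsubjects.php: build each delete link so it carries BOTH ids.
      // Assumption: this page itself was reached as viewsubjects.php?classroom=23
      $classroom_id = (int) $_GET['classroom'];
      foreach ($subjects as $subject) {   // $subjects: hypothetical result set
          echo '<a href="deletesubject.php?subject_id=' . (int) $subject['subject_id']
             . '&classroom_id=' . $classroom_id . '">Delete</a>';
      }

      // deletesubject.php: both parameters are now present.
      $subject_id   = (int) $_GET['subject_id'];
      $classroom_id = (int) $_GET['classroom_id'];
      mysql_query("DELETE FROM subjects WHERE subject_id=" . $subject_id, $connection)
          or die("MySQL Error: " . mysql_error());
      header("Location: viewsubjects.php?classroom_id=" . $classroom_id);
      exit();
      ?>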

  • Delete file using a link

    - by user329394
    Hi all, I want a function to delete a file from the database by using a link instead of a button. How can I do that? Do I need to use href/unlink or what? Can I do a popup confirmation with yes or no? I know how to do that, but where should I put the code? This is the part where the system displays all the filenames and does the direct upload. Beside each file, there will be a function for 'Remove': $qry = "SELECT * FROM table1 a, table2 b WHERE b.id = '".$rs[id]."' AND a.ptkid = '".$rs[id]."' "; $sql = get_records_sql($qry); foreach($sql as $rs){ ?> <?echo '<a href="download.php?f='.$rs->faillampiran.'">'. basename($rs->faillampiran).'</a>'; ?><td><?echo '<a href=""> [Remove]</a>';?></td><? ?><br> <? } ?> thankz all
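
    One common pattern: put the confirmation in the link's onclick handler, and point the link at a small script that removes both the file and its row, then redirects back. A hedged sketch, assuming the same Moodle-style data helpers as get_records_sql above (remove.php and list.php are hypothetical names):

      <?php
      // On the listing page: a Remove link that asks for confirmation first.
      echo '<a href="remove.php?id=' . (int) $rs->id . '"'
         . ' onclick="return confirm(\'Delete this file?\');">[Remove]</a>';

      // remove.php: delete the physical file, then the DB row, then go back.
      $id  = (int) $_GET['id'];
      $row = get_record('table1', 'id', $id);   // assumption: helper exists alongside get_records_sql
      if ($row) {
          @unlink($row->faillampiran);          // remove the file from disk
          delete_records('table1', 'id', $id);  // remove the DB record
      }
      header('Location: list.php');
      exit;
      ?>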

  • Should I change $_REQUEST to $_POST

    - by Scarface
    Hey guys quick question, I have a checkbox system where a list of items can be checked and deleted on the click of a button. I currently use request and it does the job but I was wondering if $_REQUEST was some sort of security risk or improper. If anyone has any advice I would appreciate it. Should I change to $_POST? If so, what is the best way to go about it? foreach ($_REQUEST as $key=>$value) { if (substr($key,0,3)==="img") { $id = substr($key,3); if(isset($_REQUEST['Delete'])) { $sql = 'SELECT file_name,username FROM images WHERE id=?'; $stmt = $conn->prepare($sql); $result=$stmt->execute(array($id)); while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { $image=$row['file_name']; $user=$row['username']; $myFile = "$user/images/$image"; unlink($myFile); } <input id=\"img".$id."\" name=\"img".$id."\" type=\"checkbox\">
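
    For background: $_REQUEST merges GET, POST and cookie data, so a crafted URL can trigger the same delete; switching to $_POST at least guarantees the request came from a form submission (a CSRF token would be stronger still). A minimal sketch of the change, assuming the checkboxes already sit inside a form:

      <!-- The checkboxes and the Delete button live in an explicit POST form -->
      <form method="post" action="delete_images.php">
        <input id="img123" name="img123" type="checkbox">
        <input type="submit" name="Delete" value="Delete">
      </form>

      <?php
      // delete_images.php (hypothetical name): same loop, reading $_POST only.
      if (isset($_POST['Delete'])) {
          foreach ($_POST as $key => $value) {
              if (substr($key, 0, 3) === "img") {
                  $id = (int) substr($key, 3);
                  // ... existing prepared-statement lookup and unlink() ...
              }
          }
      }
      ?>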

  • Cannot Display Data from MySQL table

    - by MxmastaMills
    I've got a pretty standard call to a MySQL database and for some reason I can't get the code to work. Here's what I have: $mysqli = mysqli_connect("localhost","username","password"); if (!$mysqli) { die('Could not connect: ' . mysqli_error($mysqli)); } session_start(); $sql = "SELECT * FROM jobs ORDER BY id DESC"; $result = $mysqli->query($sql); $num_rows = mysqli_num_rows($result); Now, first, I know that it is connecting properly because I'm not getting the die method, plus I added an else conditional in there previously and it checked out. Then the page displays but I get the errors: Warning: mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given in blablabla/index.php on line 11 Warning: mysqli_fetch_array() expects parameter 1 to be mysqli_result, boolean given in blablabla/index.php on line 12 I've double-checked my database and there is a table called jobs with a column called "id" (it's the primary key). The thing that confuses me is this is code that I literally copied and pasted from another site I built, and for some reason the code doesn't work on this one (I obviously copy and pasted it and then just changed the table name and columns accordingly). I saw the error and tried: $num_rows = $mysqli_result->num_rows; $row_array = $mysqli_result->fetch_array; and that fixed the errors but resulted in no data being passed (because obviously $mysqli_result has no value). I don't know why the error is calling for that (is it a difference in version of MySQL or PHP from the other site)? Can someone help me track down the problem? Thanks so much. Sorry if it's something super simple that I'm overlooking, I've been at it for a while.
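
    One likely culprit in the snippet above: mysqli_connect() is called without its fourth (database) argument, so "SELECT * FROM jobs" fails with "no database selected", $mysqli->query() returns false, and that boolean is what reaches mysqli_num_rows(). A hedged sketch of the fix, assuming a database named "mydb":

      <?php
      // Pass the database name (or call $mysqli->select_db() after connecting).
      $mysqli = mysqli_connect("localhost", "username", "password", "mydb");
      if (!$mysqli) {
          die('Could not connect: ' . mysqli_connect_error());
      }

      $result = $mysqli->query("SELECT * FROM jobs ORDER BY id DESC");
      if ($result === false) {
          // Check for false before handing the result to mysqli_num_rows().
          die('Query failed: ' . $mysqli->error);
      }
      $num_rows = mysqli_num_rows($result);
      ?>

    The other site probably connected with the database name included, which would explain why the identical query code worked there.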

  • How do I create a self referential association (self join) in a single class using ActiveRecord in Rails?

    - by Daniel Chang
    I am trying to create a self join table that represents a list of customers who can refer each other (perhaps to a product or a program). I am trying to limit my model to just one class, "Customer". The schema is: create_table "customers", force: true do |t| t.string "name" t.integer "referring_customer_id" t.datetime "created_at" t.datetime "updated_at" end add_index "customers", ["referring_customer_id"], name: "index_customers_on_referring_customer_id" My model is: class Customer < ActiveRecord::Base has_many :referrals, class_name: "Customer", foreign_key: "referring_customer_id", conditions: {:referring_customer_id => :id} belongs_to :referring_customer, class_name: "Customer", foreign_key: "referring_customer_id" end I have no problem accessing a customer's referring_customer: @customer.referring_customer.name ... returns the name of the customer that referred @customer. However, I keep getting an empty array when accessing referrals: @customer.referrals ... returns []. I ran binding.pry to see what SQL was being run, given a customer who has a "referer" and should have several referrals. This is the SQL being executed. Customer Load (0.3ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = ? ORDER BY "customers"."id" ASC LIMIT 1 [["id", 2]] Customer Exists (0.2ms) SELECT 1 AS one FROM "customers" WHERE "customers"."referring_customer_id" = ? AND "customers"."referring_customer_id" = 'id' LIMIT 1 [["referring_customer_id", 3]] I'm a bit lost and am unsure where my problem lies. I don't think my query is correct -- @customer.referrals should return an array of all the referrals, which are the customers who have @customer.id as their referring_customer_id.

  • how to retrieve data from db and display it in list?

    - by raji
    In the code below I am passing catID to db.getNews(catID): super.onCreate(savedInstanceState); setContentView(R.layout.newslist); Bundle extras = getIntent().getExtras(); int catID = extras.getInt("cat_id"); mInflater = (LayoutInflater)getSystemService(Context.LAYOUT_INFLATER_SERVICE); final ListView lv = (ListView) findViewById(R.id.category); lv.setAdapter(new ArrayAdapter<String>(this, R.layout.newslist, db.getNews(catID)){ public View getView(int position, View convertView, ViewGroup parent) { View row; if (null == convertView) { row = mInflater.inflate(R.layout.newslist, null); } else { row = convertView; } TextView tv = (TextView) row.findViewById(R.id.newslist_text); tv.setText(getItem(position)); return row; } }); The getNews(int catID) function is below: public NewsItemCursor getNews(int catID) { String sql = "SELECT title FROM news WHERE catid = " + catID + " ORDER BY id ASC"; SQLiteDatabase d = getReadableDatabase(); NewsItemCursor c = (NewsItemCursor) d.rawQueryWithFactory( new NewsItemCursor.Factory(), sql, null, null); c.moveToFirst(); d.close(); return c; } But I'm getting a bug saying the ArrayAdapter is undefined... Can anyone help me resolve this and retrieve the data so it displays in the list?

  • MySQLi Wrapper -- will this slow down performance?

    - by Kerry
    I found the following code on php.net. I'm trying to write a wrapper for the MySQLi library to make things incredibly simple. If this is going to slow down performance, I'll skip it and find another way; if this works, then I'll do that. I have a single query function; if someone passes in more than one variable, I assume the function has to be prepared. The function that I would use to pass an array to mysqli_stmt_bind_param is call_user_func_array; I have a feeling that is going to slow things down. Am I right? <?php /* just explaining how to call mysqli_stmt_bind_param with a parameter array */ $sql_link = mysqli_connect('localhost', 'my_user', 'my_password', 'world'); $type = "isssi"; $param = array("5", "File Description", "File Title", "Original Name", time()); $sql = "INSERT INTO file_detail (file_id, file_description, file_title, file_original_name, file_upload_date) VALUES (?, ?, ?, ?, ?)"; $sql_stmt = mysqli_prepare ($sql_link, $sql); call_user_func_array('mysqli_stmt_bind_param', array_merge(array($sql_stmt, $type), $param)); mysqli_stmt_execute($sql_stmt); ?>
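
    The call_user_func_array() overhead is negligible next to the database round-trip, so performance is not the real worry. The bigger catch is that mysqli_stmt_bind_param() takes its parameters by reference, so merging a plain array of values fails on PHP 5.3+ with a "parameter expected to be a reference" warning. A hedged sketch of a wrapper function that binds by reference:

      <?php
      // Minimal sketch: prepare, bind an array of params by reference, execute.
      function prepared_query(mysqli $link, $sql, $types, array $params)
      {
          $stmt = mysqli_prepare($link, $sql);
          $args = array($stmt, $types);
          foreach ($params as $i => $value) {
              $args[] = &$params[$i];   // bind_param needs references, not values
          }
          call_user_func_array('mysqli_stmt_bind_param', $args);
          mysqli_stmt_execute($stmt);
          return $stmt;
      }

      // Usage, mirroring the example above:
      // $stmt = prepared_query($sql_link, $sql, "isssi", $param);
      ?>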

  • NSPredicate (Core Data fetch) to filter on an attribute value being present in a supplied set (list)

    - by starbaseweb
    I'm trying to create a fetch predicate that is the analog to the SQL "IN" statement, and the syntax to do so with NSPredicate escapes me. Here's what I have so far (the relevant excerpt from my fetching routine): NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease]; NSEntityDescription *entity = [NSEntityDescription entityForName: @"BodyPartCategory" inManagedObjectContext:_context]; [request setEntity:entity]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(name IN %@)", [RPBodyPartCategory defaultBodyPartCategoryNames]]; [request setPredicate:predicate]; The entity "BodyPartCategory" has a string attribute "name". I have a list of names (just NSString objects) in an NSArray as returned by: [RPBodyPartCategory defaultBodyPartCategoryNames] So let's say that array has string such as {@"Liver", @"Kidney", @"Thyroid"} ... etc. I want to fetch all 'BodyPartCategory' instances whose name attribute matches one of the strings in the set provided (technically NSArray but I can make it an NSSet). In SQL, this would be something like: SELECT * FROM BodyPartCategories WHERE name IN ('Liver', 'Kidney', 'Thyroid') I've gone through various portions of the Predicate Programming Guide, but I don't see this simple use case covered. Pointers/help much appreciated!

  • why this condition not work

    - by migo
    I noticed that the first condition does not work: if (empty($ss)) { echo "please write your search words"; } but the second works: else if ($num < 1 ) { echo "not found any like "; } Full code: if (empty($ss)) { echo "please write your search words"; } else if ($num < 1 ) { echo "not found any like "; } else { $sql=("SELECT * FROM student WHERE snum = $ss "); $rs = mysql_query($sql) or die(mysql_error()); while($data=mysql_fetch_array($rs)){ [table markup with non-ASCII labels, garbled in extraction] } }
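
    A likely explanation is ordering: $ss has to come from the submitted form before empty() can test it, and $num can only exist after the query has run, so testing $num < 1 before any query means neither branch sees the values it expects. A hedged sketch of one workable ordering, assuming the search term arrives as $_POST['ss']:

      <?php
      // Read the term first, then query, then count.
      $ss = isset($_POST['ss']) ? trim($_POST['ss']) : '';

      if (empty($ss)) {
          echo "please write your search words";
      } else {
          $sql = "SELECT * FROM student WHERE snum = '" . mysql_real_escape_string($ss) . "'";
          $rs  = mysql_query($sql) or die(mysql_error());
          $num = mysql_num_rows($rs);          // $num only exists after the query
          if ($num < 1) {
              echo "not found any like " . htmlspecialchars($ss);
          } else {
              while ($data = mysql_fetch_array($rs)) {
                  // ... render the result row ...
              }
          }
      }
      ?>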

  • How to make cakePHP retreive the data represented by a foreign key?

    - by XL
    Greetings cake experts, I have a question that I think would really help a lot of people getting started with cakePHP. I have a feeling it will be easy for some of you, but it is quite challenging to me. I have a simple database with multiple tables. I can't figure out how to make cakePHP display the values associated with a foreign key in an index view, or create a view where fields of my choice (the ones that make sense to users, like location name, not location_id) can be updated or viewed on a single page. I have created an example at http://lovecats.cakeapp.com that illustrates the question. If you look at the page and click "list cats", you will notice that it shows the location_id field from the locations table. You will also notice that when you click "add cats", you must choose a location_id from the locations table. This is the automagic way that cakePHP builds the app. I want this to be the field location_name. The database is set up so that the table cats has a foreign key called location_id that has a relationship to a table called locations. This is my problem: I want these pages to display the location_name instead of the location_id. If you want to log in to the application, you can go to http://cakeapp.com/sqldesigners/sql/lovecats and use the password 'password' to look at the db relationships, etc. How do I have a page that shows the fields that I want? And is it possible to create a page that updates fields from all of the tables at once? This is the slice of cake that I have been trying to figure out and this would REALLY get me over a hump. You can download the app and sql from the above url.
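
    In CakePHP the usual route is to declare the association on the model and set a displayField, so that baked index views and select boxes show the human-readable name automatically. A hedged sketch in CakePHP 1.3-style conventions, with model and column names assumed from the description:

      <?php
      // app/models/cat.php -- each Cat belongs to a Location via location_id
      class Cat extends AppModel {
          var $name      = 'Cat';
          var $belongsTo = array('Location');
      }

      // app/models/location.php -- tell Cake which column represents a Location
      class Location extends AppModel {
          var $name         = 'Location';
          var $displayField = 'location_name';  // used by find('list') and select boxes
      }
      ?>

    With the association in place, the joined record is available in views as $cat['Location']['location_name'], and re-baking the cats views should produce an index and add form that show the name rather than the raw id. Editing fields from several tables on one page is then a matter of building a form with inputs for each model and calling saveAll() on the posted data.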

  • What is an efficient way to serve images (PHP)?

    - by Scarface
    Hey guys, currently I am thinking of serving images using an image handler script. I have two sources of images. One is my web images folder, from which the images used to construct my site interface are served. The other is each user's images folder, where they can store their own images. I was thinking of giving each user image a unique id, then looking that id up with the image handler script, serving the image, and changing the file name. The problem is that my site images folder does not have any information in the database and thus has no ids; should I just serve those directly? Also, this way of serving user images does not seem like the most efficient approach. If anyone has any suggestions I would really appreciate it. $sql="SELECT username,file_name FROM images WHERE id=?"; $stmt=$conn->prepare($sql); $result=$stmt->execute(array($ID)); while($row = $stmt->fetch(PDO::FETCH_ASSOC)){ $image_name = $row['file_name']; $username = $row['username']; } $path="$username/images/$image_name"; header("Expires: -1"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); readfile($path);
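
    For the interface images there is little to gain from routing through PHP at all; letting the web server send those static files directly will beat any handler script. For user images, one gap in the handler above is that it never sends a Content-Type, so the browser has to guess. A hedged sketch of the missing pieces, assuming the stored file extension can be trusted:

      <?php
      // Map the stored file's extension to a MIME type before streaming it.
      $types = array('jpg' => 'image/jpeg', 'jpeg' => 'image/jpeg',
                     'png' => 'image/png',  'gif'  => 'image/gif');
      $ext  = strtolower(pathinfo($path, PATHINFO_EXTENSION));
      $mime = isset($types[$ext]) ? $types[$ext] : 'application/octet-stream';

      header('Content-Type: ' . $mime);
      header('Content-Length: ' . filesize($path));
      readfile($path);
      exit;
      ?>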

  • Problem with mysql_query(); says resource expected

    - by user364651
    I have this php file. The lines marked as bold show the error: "mysql_query() expects parameter 2 to be resource". Well, the similar syntax on the line on which I have commented 'No error??' is working just fine. function checkAnswer($answerEntered,$quesId) { //This function checks whether answer to question having ques_id = $quesId is satisfied by $answerEntered or not $sql2="SELECT keywords FROM quiz1 WHERE ques_id=$quesId"; **$result2=mysql_query($sql2,$conn);** $keywords=explode(mysql_result($result2,0)); $matches=false; foreach($keywords as $currentKeyword) { if(strcasecmp($currentKeyword,$answerEntered)==0) { $matches=true; } } return $matches; } $sql="SELECT answers FROM user_info WHERE user_id = $_SESSION[user_id]"; $result=mysql_query($sql,$conn); // No error?? $answerText=mysql_result($result,0); //Retrieve answers entered by the user $answerText=str_replace('<','',$answerText); $answerText=str_replace('>',',',$answerText); $answerText=substr($answerText,0,(strlen($answerText)-1)); $answers=explode(",",$answerText); //Get the questions that have been assigned to the user. $sql1="SELECT questions FROM user_info WHERE user_id = $_SESSION[user_id]"; **$result1=mysql_query($sql1,$conn);** $quesIdList=mysql_result($result1,0); $quesIdList=substr($quesIdList,0,(strlen($quesIdList)-1)); $quesIdArray=explode(",",$quesIdList); $reportCard=""; $i=0; foreach($quesIdArray as $currentQuesId) { $answerEnteredByUser=$answers[$i]; if(checkAnswer($answerEnteredByUser,$currentQuesId)) { $reportCard=$reportCard+"1"; } else { $reportCard=$reportCard+"0"; } $i++; } echo $reportCard; ?> Here is the file connect.php. It is working just fine for other PHP documents. <?php $conn= mysql_connect("localhost","root","password"); mysql_select_db("quiz",$conn); ?>
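
    At least for the call inside checkAnswer(), the likely cause is variable scope: PHP functions do not see file-level variables, so $conn from connect.php is undefined inside the function and mysql_query() receives null instead of a connection resource. A minimal sketch of one fix, passing the link in explicitly (and adding the delimiter that explode() requires):

      <?php
      function checkAnswer($conn, $answerEntered, $quesId)
      {
          // $conn is now a parameter instead of an undefined local.
          $sql2     = "SELECT keywords FROM quiz1 WHERE ques_id=" . (int) $quesId;
          $result2  = mysql_query($sql2, $conn) or die(mysql_error());
          $keywords = explode(",", mysql_result($result2, 0));
          foreach ($keywords as $currentKeyword) {
              if (strcasecmp(trim($currentKeyword), $answerEntered) == 0) {
                  return true;
              }
          }
          return false;
      }

      // Call site:
      // if (checkAnswer($conn, $answerEnteredByUser, $currentQuesId)) { ... }
      ?>

    (Declaring global $conn; as the first line of the function would also work, but passing the link keeps the dependency visible.) Also make sure this file actually includes connect.php before the file-scope queries run.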

  • Javascript in php

    - by user506539
    I have a form where the user enters a category; it gets inserted into the db, and then it has to go to another page to select an image; after selection the image id has to go into the category table. I am doing something like this <?php error_reporting(E_ALL ^ E_NOTICE); $con = mysql_connect("localhost","root",""); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("mydocair", $con); $ThirdPartyCategoryName =$_POST['ThirdPartyCategoryName']; $checkformembers = mysql_query("SELECT * FROM thirdpartycategorymaster WHERE ThirdPartyCategoryName = '$ThirdPartyCategoryName'"); if(mysql_num_rows($checkformembers) != 0) { header('location:newcat.php?msg=category exists'); } else { $sql="INSERT INTO thirdpartycategorymaster (ThirdPartyCategoryID, ThirdPartyCategoryName) VALUES ('$_POST[ThirdPartyCategoryID]', '$_POST[ThirdPartyCategoryName]' )"; } if (!mysql_query($sql,$con)) { die('Error: ' . mysql_error()); } mysql_close($con); ?> <?php $res = mysql_query("SELECT ThirdPartyCategoryID FROM thirdpartycategorymaster"); while($row = mysql_fetch_array($res)) { "<script type='text/javascript'> if(window.confirm("Do you want to insert image for category?")) { document.location = "imgcat.php?id="<?php echo row2['ThirdPartyCategoryID']; ?>"" } </script>"; } ?>
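
    The quoting in that last block cannot parse: double-quoted JavaScript strings sit inside a double-quoted PHP string, and the id is echoed outside it. One workable shape is to emit the confirm() dialog once, right after the INSERT succeeds, using the new row's id. A hedged sketch reusing the names from the question (assuming ThirdPartyCategoryID is auto-increment; otherwise substitute the posted id):

      <?php
      // After a successful INSERT, fetch the new id and emit the prompt.
      $newId = mysql_insert_id($con);
      echo "<script type='text/javascript'>\n"
         . "if (window.confirm('Do you want to insert an image for this category?')) {\n"
         . "    document.location = 'imgcat.php?id=" . (int) $newId . "';\n"
         . "}\n"
         . "</script>";
      ?>

    Note that no SELECT loop is needed here; looping over every category id would emit one confirm() per row.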

  • How to tell the Session to throw the error query[NHibernate]?

    - by xandy
    I made a test class against the repository methods shown below: public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File { try { _session.Save(FileToAdd); _session.Flush(); } catch (Exception e) { if (e.InnerException.Message.Contains("Violation of UNIQUE KEY")) throw new ArgumentException("Unique Name must be unique"); else throw e; } } public void RemoveFile(File FileToRemove) { _session.Delete(FileToRemove); _session.Flush(); } And the test class: try { Data.File crashFile = new Data.File(); crashFile.UniqueName = "NonUniqueFileNameTest"; crashFile.Extension = ".abc"; repo.AddFile(crashFile); Assert.Fail(); } catch (Exception e) { Assert.IsInstanceOfType(e, typeof(ArgumentException)); } // Clean up the file Data.File removeFile = repo.GetFiles().Where(f => f.UniqueName == "NonUniqueFileNameTest").FirstOrDefault(); repo.RemoveFile(removeFile); The test fails. When I step in to trace the problem, I find that when I call _session.Flush() right after _session.Delete(), it throws the exception, and if I look at the SQL it runs, it is actually submitting an "INSERT INTO" statement, which is exactly the SQL that causes the UNIQUE CONSTRAINT error. I tried to encapsulate both in a transaction but the same problem happens. Anyone know the reason?

  • Oracle Blob as img src in PHP page

    - by menkes
    I have a site that currently uses images on a file server. The images appear on a page where the user can drag and drop each as is needed. This is done with jQuery and the images are enclosed in a list. Each image is pretty standard: <img src='//network_path/image.png' height='80px'> Now however I need to reference images stored as a BLOB in an Oracle database (no choice on this, so not a merit discussion). I have no problem retrieving the BLOB and displaying on it's own using: $sql = "SELECT image FROM images WHERE image_id = 123"; $stid = oci_parse($conn, $sql); oci_execute($stid); $row = oci_fetch_array($stid, OCI_ASSOC+OCI_RETURN_NULLS); $img = $row['IMAGE']->load(); header("Content-type: image/jpeg"); print $img; But I need to [efficiently] get that image as the src attribute of the img tag. I tried imagecreatefromstring() but that just returns the image in the browser, ignoring the other html. I looked at data uri, but the IE8 size limit rules that out. So now I am kind of stuck. My searches keep coming up with using a src attribute that loads another page that contains the image. But I need the image itself to actually show on the page. (Note: I say image, meaning at least one image but as many as eight on a page). Any help would be greatly appreciated.
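
    The "page that contains the image" advice is in fact the standard answer, and it does put the image itself on the page: the src of each <img> points at a small PHP script whose entire output is the image bytes, exactly like the working snippet above. A hedged sketch, where image.php is a hypothetical name:

      <?php
      // image.php -- outputs nothing but the BLOB for the requested id.
      $id   = (int) $_GET['id'];
      $stid = oci_parse($conn, "SELECT image FROM images WHERE image_id = :id");
      oci_bind_by_name($stid, ':id', $id);
      oci_execute($stid);
      $row = oci_fetch_array($stid, OCI_ASSOC + OCI_RETURN_NULLS);

      header("Content-type: image/jpeg");
      print $row['IMAGE']->load();
      ?>

    The list items then become <img src='image.php?id=123' height='80px'>, and with up to eight images per page the eight extra requests are normally acceptable; adding an Expires or Cache-Control header in image.php lets the browser cache them.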

  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component:

      // Add the Data Flow Task
      package.Executables.Add("STOCK:PipelineTask");

      // Get the task host wrapper, and the Data Flow task
      TaskHost taskHost = package.Executables[0] as TaskHost;
      MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

      // Add OLE-DB source component - ** This is where we need the creation name **
      IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
      componentSource.Name = "OLEDBSource";
      componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

    So as you can see the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2.

    CreationName Reference

      ADO NET Destination: Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
      ADO NET Source: Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
      Aggregate: DTSTransform.Aggregate.2
      Audit: DTSTransform.Lineage.2
      Cache Transform: DTSTransform.Cache.1
      Character Map: DTSTransform.CharacterMap.2
      Checksum: Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
      Conditional Split: DTSTransform.ConditionalSplit.2
      Copy Column: DTSTransform.CopyMap.2
      Data Conversion: DTSTransform.DataConvert.2
      Data Mining Model Training: MSMDPP.PXPipelineProcessDM.2
      Data Mining Query: MSMDPP.PXPipelineDMQuery.2
      DataReader Destination: Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
      Derived Column: DTSTransform.DerivedColumn.2
      Dimension Processing: MSMDPP.PXPipelineProcessDimension.2
      Excel Destination: DTSAdapter.ExcelDestination.2
      Excel Source: DTSAdapter.ExcelSource.2
      Export Column: TxFileExtractor.Extractor.2
      Flat File Destination: DTSAdapter.FlatFileDestination.2
      Flat File Source: DTSAdapter.FlatFileSource.2
      Fuzzy Grouping: DTSTransform.GroupDups.2
      Fuzzy Lookup: DTSTransform.BestMatch.2
      Import Column: TxFileInserter.Inserter.2
      Lookup: DTSTransform.Lookup.2
      Merge: DTSTransform.Merge.2
      Merge Join: DTSTransform.MergeJoin.2
      Multicast: DTSTransform.Multicast.2
      OLE DB Command: DTSTransform.OLEDBCommand.2
      OLE DB Destination: DTSAdapter.OLEDBDestination.2
      OLE DB Source: DTSAdapter.OLEDBSource.2
      Partition Processing: MSMDPP.PXPipelineProcessPartition.2
      Percentage Sampling: DTSTransform.PctSampling.2
      Performance Counters Source: DataCollectorTransform.TxPerfCounters.1
      Pivot: DTSTransform.Pivot.2
      Raw File Destination: DTSAdapter.RawDestination.2
      Raw File Source: DTSAdapter.RawSource.2
      Recordset Destination: DTSAdapter.RecordsetDestination.2
      RegexClean: Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
      Row Count: DTSTransform.RowCount.2
      Row Count Plus: Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
      Row Number: Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
      Row Sampling: DTSTransform.RowSampling.2
      Script Component: Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
      Slowly Changing Dimension: DTSTransform.SCD.2
      Sort: DTSTransform.Sort.2
      SQL Server Compact Destination: Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
      SQL Server Destination: DTSAdapter.SQLServerDestination.2
      Term Extraction: DTSTransform.TermExtraction.2
      Term Lookup: DTSTransform.TermLookup.2
      Trash Destination: Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
      TxTopQueries: DataCollectorTransform.TxTopQueries.1
      Union All: DTSTransform.UnionAll.2
      Unpivot: DTSTransform.UnPivot.2
      XML Source: Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

    Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dumps out a list of all components like that above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

      using System;
      using System.Diagnostics;
      using Microsoft.SqlServer.Dts.Runtime;

      public class Program
      {
          static void Main(string[] args)
          {
              Application application = new Application();
              PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
              foreach (PipelineComponentInfo componentInfo in componentInfos)
              {
                  Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
              }
              Console.Read();
          }
      }

  • How to cache dynamic javascript/jquery/ajax/json content with Akamai

    - by Starfs
    Trying to wrap my head around how things are cached on a CDN; this is new territory for me. In the document we received about sending in environment requests, it says "Dynamically-generated content will not benefit much from EdgeSuite". I feel like this is a simplified statement and there has to be a way to cache dynamically generated content if the tools are configured correctly. The site we are working with runs off a wordpress database, and uses javascript and ajax to build the pages, based on the json objects that php scripts have generated. The process: the user's browser hits the URL, the browser talks to the EdgeSuite tools, which will have cached certain pre-defined elements, and then requests from the host web server anything that is not cached; once EdgeSuite has compiled a combination of the two, it sends that information back to the browser. Can we not simply cache all json objects (and of course images, js, css) so the web browser never has to hit the host server's database, at which point, in essence, we have cached our dynamic content? Does anyone have any pointers on the most efficient configuration for this type of system -- Akamai/CDN -- to serve javascript/ajax/json generated pages that ideally already hit pre-cached json data? Any and all feedback is welcome!

  • Clang warning flags for Objective-C development

    - by Macmade
    As a C & Objective-C programmer, I'm a bit paranoid with the compiler warning flags. I usually try to find a complete list of warning flags for the compiler I use and turn most of them on, unless I have a really good reason not to. I personally think this may actually improve coding skills, as well as potential code portability, and prevent some issues, as it forces you to be aware of every little detail, potential implementation and architecture issues, and so on... It's also in my opinion a good every day learning tool, even if you're an experienced programmer. For the subjective part of this question, I'm interested in hearing other developers (mainly C, Objective-C and C++) about this topic. Do you actually care about stuff like pedantic warnings, etc? And if yes or no, why? Now about Objective-C, I recently completely switched to the LLVM toolchain (with Clang), instead of GCC. On my production code, I usually set these warning flags (explicitly, even if some of them may be covered by -Wall): -Wall -Wbad-function-cast -Wcast-align -Wconversion -Wdeclaration-after-statement -Wdeprecated-implementations -Wextra -Wfloat-equal -Wformat=2 -Wformat-nonliteral -Wfour-char-constants -Wimplicit-atomic-properties -Wmissing-braces -Wmissing-declarations -Wmissing-field-initializers -Wmissing-format-attribute -Wmissing-noreturn -Wmissing-prototypes -Wnested-externs -Wnewline-eof -Wold-style-definition -Woverlength-strings -Wparentheses -Wpointer-arith -Wredundant-decls -Wreturn-type -Wsequence-point -Wshadow -Wshorten-64-to-32 -Wsign-compare -Wsign-conversion -Wstrict-prototypes -Wstrict-selector-match -Wswitch -Wswitch-default -Wswitch-enum -Wundeclared-selector -Wuninitialized -Wunknown-pragmas -Wunreachable-code -Wunused-function -Wunused-label -Wunused-parameter -Wunused-value -Wunused-variable -Wwrite-strings I'm interested in hearing what other developers have to say about this. For instance, do you think I missed a particular flag for Clang (Objective-C), and why? Or do you think a particular flag is not useful (or not wanted at all), and why?

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations.  Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects.  By introducing Parallelism to our declarative model, we add some extra complexity.  This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let’s begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ. LINQ to Objects is mainly built upon a single class: Enumerable.  The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>.  Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface.  This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ: double min = collection .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); Other LINQ variants work in a similar fashion.  For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database. PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable.  When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel().  This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ.  There is a similar ParallelQuery class for working with non-generic IEnumerable implementations. This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you’ll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>.  Instead of dealing with an interface, implemented by an unknown class, we’re dealing with a specific class type.  This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always “switch back” to an IEnumerable<T>.  The difference only arises at the beginning of our parallelization.  When we’re using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().  
There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread safe.  For example, the following is perfectly valid, and similar to our previous examples: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); However, if SomeOperation() is not thread safe, we could just as easily do: double min = collection .Select(item => item.SomeOperation()) .AsParallel() .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); In this case, we’re using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel. PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .AsEnumerable() .Min(item => item.PerformComputation()); Here, we’re converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially. This could also be written as two statements, as well, which would allow us to use the language integrated syntax for the first portion: var tempCollection = from item in collection.AsParallel() let e = item.SomeOperation() where (e.SomeProperty > 6 && e.SomeProperty < 24) select e; double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation()); This allows us to use the standard LINQ style language integrated query syntax, but control whether it’s performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately. The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of of source collection. This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved. For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.  
    In LINQ to Objects, our code might look something like: // Using IEnumerable<SourceClass> collection IEnumerable<ResultClass> results = collection .Select(e => e.CreateResult()) .Take(100); If we just converted this to a parallel query naively, like so: IEnumerable<ResultClass> results = collection .AsParallel() .Select(e => e.CreateResult()) .Take(100); We could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use: IEnumerable<ResultClass> results = collection .AsParallel() .AsOrdered() .Select(e => e.CreateResult()) .Take(100); This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness. PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we’ve used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

  • In HLSL pixel shader , why is SV_POSITION different to other semantics?

    - by tina nyaa
    In my HLSL pixel shader, SV_POSITION seems to have different values to any other semantic I use. I don't understand why this is. Can you please explain it? For example, I am using a triangle with the following coordinates: (0.0f, 0.5f) (0.5f, -0.5f) (-0.5f, -0.5f) The z and w values are 0 and 1, respectively. Here is the shader code. struct VS_IN { float4 pos : POSITION; }; struct PS_IN { float4 pos : SV_POSITION; float4 k : LOLIMASEMANTIC; }; PS_IN VS( VS_IN input ) { PS_IN output = (PS_IN)0; output.pos = input.pos; output.k = input.pos; return output; } float4 PS( PS_IN input ) : SV_Target { // screenshot 1 return input.pos; // screenshot 2 return input.k; } technique10 Render { pass P0 { SetGeometryShader( 0 ); SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetPixelShader( CompileShader( ps_4_0, PS() ) ); } } Screenshot 1: http://i.stack.imgur.com/rutGU.png Screenshot 2: http://i.stack.imgur.com/NStug.png (Sorry, I'm not allowed to post images until I have a lot of 'reputation') When I use the first statement (result is first screenshot), the one that uses the SV_POSITION semantic, the result is completely unexpected and is yellow, whereas using any other semantic will produce the expected result. Why is this?

  • "const char *" is incompatible with parameter of type "LPCWSTR" error

    - by N0xus
    I'm trying to incorporate some code from Programming an RTS Game With Direct3D into my game. Before anyone says it, I know the book is kinda old, but it's the particle effects system he creates that I'm trying to use. With his shader class, he initialises it thusly: void SHADER::Init(IDirect3DDevice9 *Dev, const char fName[], int typ) { m_pDevice = Dev; m_type = typ; if(m_pDevice == NULL)return; // Assemble and set the pixel or vertex shader HRESULT hRes; LPD3DXBUFFER Code = NULL; LPD3DXBUFFER ErrorMsgs = NULL; if(m_type == PIXEL_SHADER) hRes = D3DXCompileShaderFromFile(fName, NULL, NULL, "Main", "ps_2_0", D3DXSHADER_DEBUG, &Code, &ErrorMsgs, &m_pConstantTable); else hRes = D3DXCompileShaderFromFile(fName, NULL, NULL, "Main", "vs_2_0", D3DXSHADER_DEBUG, &Code, &ErrorMsgs, &m_pConstantTable); } However, this generates the following error: Error 1 error C2664: 'D3DXCompileShaderFromFileW' : cannot convert parameter 1 from 'const char []' to 'LPCWSTR' The compiler states the issue is with fName in the D3DXCompileShaderFromFile line. I know this has something to do with the character set, and my program was already using the Unicode character set. I read that to solve the above problem, I need to switch to a multi-byte character set. But, if I do that, I get other errors in my code, like so: Error 2 error C2664: 'D3DXCreateEffectFromFileA' : cannot convert parameter 2 from 'const wchar_t *' to 'LPCSTR' With it being attributed to the following line of code: if(FAILED(D3DXCreateEffectFromFile(m_pD3DDevice9,effectFileName.c_str(),NULL,NULL,0,NULL,&m_pCurrentEffect,&pErrorBuffer))) This if is nested within another if statement checking my effectmap list. Though it is the FAILED word with the red line. Likewise I get another error with the following line of code: wstring effectFileName = TEXT("Sky.fx"); With the error message being: Error 1 error C2440: 'initializing' : cannot convert from 'const char [7]' to 'std::basic_string<_Elem,_Traits,_Ax' If I change it back to a Unicode character set, I get the original (fewer) errors. Leaving it as multi-byte, I get more errors. Does anyone know of a way I can fix this issue?

  • MySQL at Mobile World Congress (on Valentine's Day...)

    - by mat.keep(at)oracle.com
    It is that time of year again when the mobile communications industry converges on Barcelona for what many regard as the premier telecommunications show of the year. Starting on February 14th, what better way for a Brit like me to spend Valentine's Day than with 50,000 mobile industry leaders (my wife doesn't tend to read this blog, so I'm reasonably safe with that statement). As ever, Oracle has an extensive presence at the show, and part of that presence this year includes MySQL. We will be running a live demonstration of the MySQL Cluster database on Booth 7C18 in the App Planet. The demonstration will show how the MySQL Cluster Connector for Java is implemented to provide native connectivity to the carrier-grade MySQL Cluster database from Java ME clients via Java SE virtual machines and Java EE servers. The demonstration will show how end-to-end Java services remain continuously available during both catastrophic failures and scheduled maintenance activities. The MySQL Cluster Connector for Java provides both a native Java API and a JPA plug-in that directly maps Java objects to relational tables stored in the MySQL Cluster database, without the overhead and complexity of having to transform objects to JDBC, and then SQL. The result is 10x higher throughput, and a simpler development model for Java engineers. Stop by the stand for a demonstration, and an opportunity to speak with the MySQL telecoms team, who will share experiences on how MySQL is being used to bring the innovation of the web to the carrier network. Of course, if you can't make it to Barcelona, you can still learn more about the MySQL Cluster Connector for Java from this whitepaper, and you are free to download it as part of MySQL Cluster Community Edition. Let us know via the comments if you have Java applications that you think will benefit from the MySQL Cluster Connector for Java. I can't promise that Valentine's Day at MWC will be the time you fall in love with MySQL Cluster... but I'm confident you will at least develop a healthy respect for it.

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the "Last Good Build" and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well, but there is a dimension missing here regarding releases. Often you can run several release builds before you actually deploy the result of the build to a test or production system. When you do this, wouldn't it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version? At our company, we have developed a Release Repository, which basically is a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip-files, sql scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build etc. Also, it contains the associated changesets and work items. When deploying the MSI's for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets: Here is a sample screenshot of how this looks for a sample build/application The web site is available both for us and for the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version. There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedWorkItems. You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process: Now, you need to assign the value from AssociatedChangesets to this variable. This is done using the Assign workflow activity: Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated. 
    To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable.   First you add an InArgument on your activity where you can pass in the variable that we defined. [RequiredArgument] public InArgument<IList<Changeset>> AssociatedChangesets { get; set; } Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated with that changeset: foreach (Changeset ch in associatedChangesets) { // Add change theChangesets.Add( new AssociatedChangeset(ch.ChangesetId, ch.ArtifactUri, ch.Committer, ch.Comment, ch.ChangesetId)); foreach (var wi in ch.WorkItems) { theWorkItems.Add( new AssociatedWorkItem(wi["System.AssignedTo"].ToString(), wi.Id, wi["System.State"].ToString(), wi.Title, wi.Type.Name, wi.Id, wi.Uri)); } } NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information that is eventually pushed to the release repository.

  • Creating block devices for openstack deployment using MAAS and juju (nova-volume deployment)

    - by Tom Van Hoof
    Hi, I'm currently trying to get an openstack deployment working by using MAAS with 9 nodes and juju. To do this I found this guide, written for ubuntu 12.04 LTS https://help.ubuntu.com/community/UbuntuCloudInfrastructure and followed it as well as I could. After a vigorous amount of trial and error I finally got to the point where I'm supposed to deploy nova-volume using the "custom" config file. However, when my node is started and shows up as running in the "juju status" report, the service reports that the installation failed. I'm trying to install with juju jitsu by the way. I think it has something to do with the following statement in the openstack.cfg file: nova-volume: # This must be a free block device that is writable on the nova-volume host. block-device: "xvdb" overwrite: "true" I did some research and found that (at least I think) this refers to a Xen virtual drive/device, and because the device is not present on the node it's being deployed to, the installation fails. What I don't understand is how I am supposed to have such a block device available on a machine which is completely managed by MAAS. Does anyone here have experience with this and know of a way to solve it, or am I missing something big here? Some kind of missing link between MAAS and a separate Xen host? All help is welcome. Kind regards, Tom

  • Would you re-design completely under .Net?

    - by dboarman
    A very extensive application began as an Access-based system (for database storage). Forms were written in VB5 and/or VB6. As .Net became a fixture in the development community, certain modules have been rewritten. This seems very awkward and potentially costly to maintain because of the mix of technologies and the extra work needed to keep the two happy with each other. Of course, the application uses a mix of ODBC, OleDb and MySql. Would you think spending the time and resources to completely re-develop the application under .Net would be more cost effective? In an effort to develop a more stable application, wouldn't it make sense to use .Net? Or continue chasing Access bugs, adding new features in .Net (which may or may not create new bugs between .Net and Access), and rewriting old Access modules into .Net modules under time constraints that prevent proper design and development? Update The application uses OleDb and MySql - I corrected my previous statement. Also, to lend further support to rewriting: I have since found out that when the "porting" to .Net began, the VBA/VB6 code that existed was basically translated to the .Net equivalent. From my understanding, nothing was done to improve performance, or take advantage of new libraries or technologies. In my opinion, this creates a very fragile and unstable application. With every new update, this becomes more and more visible. As a help desk technician, I have noticed an increase in problems reported. The customers using the software have noticed an increase in problems and are commenting on it.
