Search Results

Search found 35571 results on 1423 pages for 'sql developer'.

  • Delphi: how to create Firebird database programmatically

    - by Brad
    I'm using D2K9, Zeos 7Alpha, and Firebird 2.1 I had this working before I added the autoinc field. Although I'm not sure I was doing it 100% correctly. I don' know what order to do the SQL code, with the triggers, Generators, etc.. I've tried several combinations, I'm guessing I'm doing something wrong other than just that for this not to work. SQL File From IB Expert : /********************************************/ /* Generated by IBExpert 5/4/2010 3:59:48 PM / /*********************************************/ /********************************************/ /* Following SET SQL DIALECT is just for the Database Comparer / /*********************************************/ SET SQL DIALECT 3; /********************************************/ /* Tables / /*********************************************/ CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID; CREATE TABLE EMAIL_ACCOUNTS ( ID INTEGER NOT NULL, FNAME VARCHAR(35), LNAME VARCHAR(35), ADDRESS VARCHAR(100), CITY VARCHAR(35), STATE VARCHAR(35), ZIPCODE VARCHAR(20), BDAY DATE, PHONE VARCHAR(20), UNAME VARCHAR(255), PASS VARCHAR(20), EMAIL VARCHAR(255), CREATEDDATE DATE, "ACTIVE" BOOLEAN DEFAULT 0 NOT NULL /* BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) /, BANNED BOOLEAN DEFAULT 0 NOT NULL / BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) /, "PUBLIC" BOOLEAN DEFAULT 0 NOT NULL / BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) */, NOTES BLOB SUB_TYPE 0 SEGMENT SIZE 1024 ); /********************************************/ /* Primary Keys / /*********************************************/ ALTER TABLE EMAIL_ACCOUNTS ADD PRIMARY KEY (ID); /********************************************/ /* Triggers / /*********************************************/ SET TERM ^ ; /********************************************/ /* Triggers for tables / /*********************************************/ /* Trigger: EMAIL_ACCOUNTS_BI */ CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS ACTIVE BEFORE INSERT POSITION 0 AS BEGIN IF (NEW.ID IS NULL) THEN NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1); END ^ SET TERM ; ^ /********************************************/ /* Privileges / /*********************************************/ Triggers: /********************************************/ /* Following SET SQL DIALECT is just for the Database Comparer / /*********************************************/ SET SQL DIALECT 3; CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID; SET TERM ^ ; CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS ACTIVE BEFORE INSERT POSITION 0 AS BEGIN IF (NEW.ID IS NULL) THEN NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1); END ^ SET TERM ; ^ Generators: CREATE SEQUENCE GEN_EMAIL_ACCOUNTS_ID; ALTER SEQUENCE GEN_EMAIL_ACCOUNTS_ID RESTART WITH 2; /* Old syntax is: CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID; SET GENERATOR GEN_EMAIL_ACCOUNTS_ID TO 2; */ My Code: procedure TForm2.New1Click(Sender: TObject); var query:string; begin if JvOpenDialog1.Execute then begin ZConnection1.Disconnect; ZConnection1.Database:= jvOpenDialog1.FileName; if not FileExists(ZConnection1.database) then begin ZConnection1.Properties.Add('createnewdatabase=create database '''+ZConnection1.Database+''' user ''sysdba'' password ''masterkey'' page_size 4096 default character set iso8859_2;'); try ZConnection1.Connect; except ShowMessage('Error Connection To Database File'); application.Terminate; end; end else begin ShowMessage('Database File Already Exists.'); exit; end; end; query := 'CREATE DOMAIN BOOLEAN AS SMALLINT CHECK (value is null or value in (0, 1))'; 
Zconnection1.ExecuteDirect(query); query:='CREATE TABLE EMAIL_ACCOUNTS (ID INTEGER NOT NULL,FNAME VARCHAR(35),LNAME VARCHAR(35),'+ 'ADDRESS VARCHAR(100), CITY VARCHAR(35), STATE VARCHAR(35), ZIPCODE VARCHAR(20),' + 'BDAY DATE, PHONE VARCHAR(20), UNAME VARCHAR(255), PASS VARCHAR(20),' + 'EMAIL VARCHAR(255),CREATEDDATE DATE , '+ '"ACTIVE" BOOLEAN DEFAULT 0 NOT NULL,'+ 'BANNED BOOLEAN DEFAULT 0 NOT NULL,'+ '"PUBLIC" BOOLEAN DEFAULT 0 NOT NULL,' + 'NOTES BLOB SUB_TYPE 0 SEGMENT SIZE 1024)'; //ZConnection.ExecuteDirect('CREATE TABLE NOTES (noteTitle TEXT PRIMARY KEY,noteDate DATE,noteNote TEXT)'); Zconnection1.ExecuteDirect(query); { } query := 'CREATE SEQUENCE GEN_EMAIL_ACCOUNTS_ID;'+ 'ALTER SEQUENCE GEN_EMAIL_ACCOUNTS_ID RESTART WITH 1'; Zconnection1.ExecuteDirect(query); query := 'ALTER TABLE EMAIL_ACCOUNTS ADD PRIMARY KEY (ID)'; Zconnection1.ExecuteDirect(query); query := 'SET TERM ^'; Zconnection1.ExecuteDirect(query); query := 'CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS'+ 'ACTIVE BEFORE INSERT POSITION 0'+ 'AS'+ 'BEGIN'+ 'IF (NEW.ID IS NULL) THEN'+ 'NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1);'+ 'END'+ '^'+ 'SET TERM ; ^'; Zconnection1.ExecuteDirect(query); ZTable1.Active:=true; end;
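    A hedged sketch of the DDL steps that usually trip this up (assuming Zeos 7's TZConnection.ExecuteDirect, as in the code above): SET TERM is an isql/IBExpert client directive rather than server-side SQL, so it should not be sent through ExecuteDirect at all, and each call should carry exactly one statement (the CREATE SEQUENCE / ALTER SEQUENCE pair included). Note also that the concatenated trigger fragments above run together ('...EMAIL_ACCOUNTS' + 'ACTIVE...' becomes EMAIL_ACCOUNTSACTIVE), so each piece needs a trailing space:

      // one DDL statement per ExecuteDirect call: split the sequence creation ...
      ZConnection1.ExecuteDirect('CREATE SEQUENCE GEN_EMAIL_ACCOUNTS_ID');
      ZConnection1.ExecuteDirect('ALTER SEQUENCE GEN_EMAIL_ACCOUNTS_ID RESTART WITH 1');

      // ... and create the trigger as a single statement, with no SET TERM and with
      // a space at the end of every concatenated fragment
      query := 'CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS ' +
               'ACTIVE BEFORE INSERT POSITION 0 ' +
               'AS ' +
               'BEGIN ' +
               '  IF (NEW.ID IS NULL) THEN ' +
               '    NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID, 1); ' +
               'END';
      ZConnection1.ExecuteDirect(query);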

    Read the article

  • Implementing Unit Testing with the iPhone SDK

    - by ing0
    Hi, So I've followed this tutorial to setup unit testing on my app when I got a little stuck. At bullet point 8 in that tutorial it shows this image, which is what I should be expecting when I build: However this isn't what I get when I build. I get this error message: Command /bin/sh failed with exit code 1 as well as the error message the unit test has created. Then, when I expand on the first error I get this: PhaseScriptExecution "Run Script" "build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh" cd "/Users/james/Desktop/FYP/3D Pool" setenv ACTION build setenv ALTERNATE_GROUP staff ... setenv XCODE_VERSION_MAJOR 0300 setenv XCODE_VERSION_MINOR 0320 setenv YACC /Developer/usr/bin/yacc /bin/sh -c "\"/Users/james/Desktop/FYP/3D Pool/build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh\"" /Developer/Tools/RunPlatformUnitTests.include:412: note: Started tests for architectures 'i386' /Developer/Tools/RunPlatformUnitTests.include:419: note: Running tests for architecture 'i386' (GC OFF) objc[12589]: GC: forcing GC OFF because OBJC_DISABLE_GC is set Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' started at 2010-01-04 21:05:06 +0000 Test Suite 'LogicTests' started at 2010-01-04 21:05:06 +0000 Test Case '-[LogicTests testFail]' started. /Users/james/Desktop/FYP/3D Pool/LogicTests.m:17: error: -[LogicTests testFail] : Must fail to succeed. Test Case '-[LogicTests testFail]' failed (0.000 seconds). Test Suite 'LogicTests' finished at 2010-01-04 21:05:06 +0000. Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.000) seconds Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' finished at 2010-01-04 21:05:06 +0000. Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.002) seconds /Developer/Tools/RunPlatformUnitTests.include:448: error: Failed tests for architecture 'i386' (GC OFF) /Developer/Tools/RunPlatformUnitTests.include:462: note: Completed tests for architectures 'i386' Command /bin/sh failed with exit code 1 Now this is very odd as it is running the tests (and succeeding as you can see my STFail firing) because if I add a different test which passes I get no errors, so the tests are working fine. But why am I getting this extra build fail? It may also be of note that when downloading solutions/templates which should work out the box, I get the same error. I'm guessing I've set something up wrong here but I've just followed a tutorial 100% correctly!! If anyone could help I'd be very grateful! Thanks EDIT: According to this blog, this post and a few other websites, I'm not the only one getting this problem. It has been like this since the release of xCode 3.2, assuming the apple dev center documents and tutorials etc are pre-3.2 as well. However some say its a known issue whereas others seem to think this was intentional. I for one would like both the extended console and in code messages, and I certainly do not like the "Command /bin/sh..." error and really think they would have documented such an update. Hopefully it will be fixed soon anyway. UPDATE: Here's confirmation it's something changed since the release of xCode 3.2.1. This image: is from my test build using 3.2.1. This one is from an older version (3.1.4): . (The project for both was unchanged). Edit: Image URLS updated.
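    For reference, a minimal OCUnit (SenTestingKit) test case similar to what the tutorial's LogicTests.m contains — the deliberately failing test is what produces the "Must fail to succeed." line in the build transcript, and the second test shows the passing shape:

      #import <SenTestingKit/SenTestingKit.h>

      @interface LogicTests : SenTestCase
      @end

      @implementation LogicTests

      - (void)testFail {
          STFail(@"Must fail to succeed.");          // always reported as a failure
      }

      - (void)testAppendingStrings {
          NSString *s = [@"3D " stringByAppendingString:@"Pool"];
          STAssertEqualObjects(s, @"3D Pool", @"string concatenation should work");
      }

      @end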

    Read the article

  • What can I do to improve a project when there is a no-listening situation? Developers vs Management

    - by NazGul
    Hi all, I hope that I'm not the only one and I can get a answer from someone with more experience than me, so I can think cleaner and I don't get depressed with this developer's life. I'm working as developer for a small company three years now. In that three years I'm working in the same project and sincerely, I think this project could be used as a CASE STUDY because it has all the situations that cannot happen in a project and that makes a project fails. To begin with, and I believe you've already noticed, the project has 3 years already (develoment only) and is still unfinished, because in every meeting there is a "new priority" ,or a "new problem" to be solve or a "new feature" to be add. So, first problem is no target set. How can you know when something is finished if you don't know what you want? I understand Management, because they see an oportunity and try to get that, but I don't understand how can they not see (or hear us) that they'll lose all they already have and what they'll eventually get. Second, there is no team group. My team consists of three people, a Senior Developer, a DBA and, finally, I for all the work (support, testing, new features, bug fixing, meeting, projet management of clients, etc) aka Junior Developer. The first (senior developer), does not perform any tests on his changes, so, most of the time, his changes give us problems (us = me, since I'm the one who will fix it). The second (DBA) is an uncompromising person and you can not talk to him, believe me, I tried! In his view, everything he does is fantastic... even if it is the most complicated to make it... And he does everything he wants, even if we need that only for 5 months later and would help some extra-hand to do the things we have to do for now. As you can see, there is very hard to work with no help... Third, there is no testings. Every... I repeat, Every release of the project, the customers wants to kill us, because there is a lot of bugs. Management? They say that they want tests before the release. Us? We say the same. Time? No time. Management? There is always some time to open the application and click in some buttons. Us? Try to explain that it is not so simple. Management doesn't care... end of story. Actually, must of the bugs could be avoid with a rigorous work... Some people just want to do the show to the Management. "Did you ask for this? Cool, it's done. Bugs? The Do-all-the-work guy will solve." Unfortunally for me, sometimes the Do-all-the-work also has to finish it. And to makes this all better, I'm the person who will listen the complaints from the customers. Cool, huh? I know, everyone makes mistakes. But there is mistakes and mistakes... To complete, in the Management view, "the problem is the lack of an individual project management", because we cannot do all the stuff they ask, even if there is no PM for the project itself. And ask us to work overtime without any reward... I do say all this stuff to the management and others members, but by telling this, the I'm the bad guy, the guy who is complain when everything is going well... but we need to work overtime... sigh What can I do to make it works? Anyone has a situation like this, what did you do? I hope you could understand my problem, my English is a little rusty. Thanks.

    Read the article

  • SoapUI & populating the database before tests

    - by mada
    SoapUI & database data needed. Hi, in a J2EE project (Spring WS + Hibernate + PostgreSQL + Maven 2 + Tomcat), we have created web services with full CRUD operations. We are currently testing them with SoapUI (not yet integrated into the integration-test Maven phase), and we need Tomcat running to test them. Before the test phase, a Maven plugin injects some data into the database, one time only, through many SQL files before all the tests are launched. The problem is that to test some web services I need certain data rows and their IDs (primary keys). A - What is the best way to create them? 1 - Populate one of the existing .sql test files. But then we have a big dependency: I have to be careful to use the same ID in the SQL insert and in SoapUI, and if any developer drops that value, the test will break. 2 - Create, among the .sql test files, a soapui.sql where ALL the data needed for the SoapUI tests is concentrated. The advantage is that maintenance becomes much easier. 3 - Because I have web services available for all CRUD operations, I can create, before any test suite, a preliminary suite called setup-testsuiteX. In it, I would use those web services to inject data into the database and save the IDs in variables thanks to the "property transfer" feature. 4 - Launch the SoapUI test XML file from a JUnit test and, in the setup method (we are using Spring Test), populate the database with DbUnit. Drawback: there is already data in the database and some conflicts may appear due to constraints. And now I have a dependency between the DbUnit XML file (with the primary key values) and the SoapUI test where those primary key values will be used. Advantage: an automatic rollback is possible for DbUnit data, but I don't know how it will behave, because that data has been used through the web services. 5 - Your suggestion? A better idea? Testing those web services creates data in the DB; what is the best way to remove it at the end? (In our DAO tests we were using an automatic rollback with Spring Test to keep the database clean.) Thanks in advance for your help, and sorry for my English, it is not my mother tongue. Regards.
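    For illustration, a minimal sketch of option 2: a single soapui.sql seed file, loaded by the same Maven plugin as the other scripts, so every fixed ID the SoapUI tests rely on lives in one place (the table and column names here are invented):

      -- soapui.sql: seed rows with fixed primary keys reserved for the SoapUI suite
      INSERT INTO customer (id, name)               VALUES (9001, 'soapui-customer');
      INSERT INTO project  (id, customer_id, label) VALUES (9101, 9001, 'soapui-project');
      -- companion clean-up script, run after the suite, removes only the seeded rows:
      -- DELETE FROM project  WHERE id = 9101;
      -- DELETE FROM customer WHERE id = 9001;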

    Read the article

  • Executing procedural queries in Perl

    - by KalpaG
    I have a query like this set @valid_total:=0; set @invalid_total:=0; select week as weekno, measured_week,project_id as project, role_category_id as role_category, valid_count,valid_tickets, (@valid_total := @valid_total + valid_count) as valid_total, invalid_count,invalid_tickets, (@invalid_total := @invalid_total + invalid_count) as invalid_total from metric_fault_bug_project where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11; it executes fine in heidi (MySQL client) but when it comes to perl, it gives me this error DBD::mysql::st execute failed: You have an error in your SQL syntax; check the m anual that corresponds to your MySQL server version for the right syntax to use near ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1 at D:\Mx\scripts\test.pl line 35. Can't execute SQL statement: You have an error in your SQL syntax; check the man ual that corresponds to your MySQL server version for the right syntax to use ne ar ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1 The problem seem to be in the set @valid_total := 0; line. I am fairly new to Perl. Can anyone help? this is the complete perl code #!/usr/bin/perl #use lib '/x01/home/kalpag/libs'; use DBI; use strict; use warnings; my $sid = 'issues'; my $user = 'root'; my $passwd = 'kalpa'; my $connection = "DBI:mysql:database=$sid;host=localhost"; my $dbhh = DBI->connect( $connection, $user, $passwd) || die "Database connection not made: $DBI::errstr"; my $sql_query = 'set @valid_total:=0; set @invalid_total:=0; select week as weekno, measured_week,project_id, role_category_id as role_category, valid_count,valid_tickets, (@valid_total := @valid_total + valid_count) as valid_total, invalid_count,invalid_tickets, (@invalid_total := @invalid_total + invalid_count) as invalid_total from metric_fault_bug_project where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11'; my $sth = $dbhh->prepare($sql_query) or die "Can't prepare SQL statement: $DBI::errstr\n"; $sth->execute() or die "Can't execute SQL statement: $DBI::errstr\n"; while ( my @memory = $sth->fetchrow() ) { print "@memory \n"; }
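    One likely fix, sketched below: DBD::mysql sends a single statement per call by default, so the two SET statements and the SELECT need to be issued separately on the same handle (the user variables survive between calls because the connection stays open). Here $select_only_query is a placeholder for just the SELECT ... FROM metric_fault_bug_project portion of the original $sql_query:

      # initialise the session variables first, each in its own statement
      $dbhh->do('SET @valid_total := 0')   or die "Can't run SET: $DBI::errstr\n";
      $dbhh->do('SET @invalid_total := 0') or die "Can't run SET: $DBI::errstr\n";

      # then prepare and execute only the SELECT
      my $sth = $dbhh->prepare($select_only_query)
          or die "Can't prepare SQL statement: $DBI::errstr\n";
      $sth->execute() or die "Can't execute SQL statement: $DBI::errstr\n";

    DBD::mysql also has a mysql_multi_statements connect option that allows several statements per call, but splitting the statements is the smaller change.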

    Read the article

  • Can't Set Crystal Report Selection Formula Programmatically

    - by eidylon
    Hello all, first, I can't stand Crystal! Okay, that's off my chest... Now, we have an old VB6 app we maintain for a client, which uses the Crystal Automation library to programatically change the record selection formulas in a bunch of Crystal Reports 8.5 reports. There are two reports which are ALMOST identical. I had to change them recently to add another field from another table. When I added the table in to the reports though, while it added it in the visual designer, it did not add it in the FROM clause of the SQL statement. So, I manually edited the SQL statement to add in the additional join. KO, works great. If i run the reports in Crystal preview mode, they work exactly as expected. Now, the users went to test the changes from within the VB app. One of the reports works fine and dandy. The other report however, is failing to set the selection formula as expected. The code sets the selection formulas using the function PESetSelectionFormula. I verified that the string being passed in to the function as the new selection formula is correct via a step-through examination of the variables. The call to PESetSelectionFormula seems to be working okay, and is returning a value of 1, which as near as I can find anywhere indicates success. (The other report, which is working fine from code is also returning 1.) However, the report is failing with an error: Error Code: 534 - Error detected by database DLL. The code, for debugging purposes dumps out the SQL string currently being used by the report. The SQL coming out of the report is: SELECT ... FROM ... WHERE ORDER BY ... As you can see, the WHERE clause is blank, which I would imagine is why the database DLL is upchucking on this statement. I don't understand why the automation library is not setting the WHERE clause even though the call to PESetSelectionFormula is being passed a valid string and is returning success. I thought perhaps it was because I had manually edited the SQL in the report to add the table it wasn't adding, but I did the same thing in the other nearly identical report, and that one is working fine. Anyone have any ideas why PESetSelectionFormula might report success but not actually do anything? P.S. I have already tried doing a Database Verify Database from the menus, and that said the report was all up to date and did not help at all.

    Read the article

  • Is it worth moving from stored procedures to LINQ?

    - by Josef
    I'm looking at standardizing programming in an organisation. Half uses stored procedures and the other half LINQ. From what I've read there is still some debate going on about this topic. My concern is that MS is trying to slip in its own proprietary query language, LINQ, to make SQL redundant. If, a few years back, Microsoft had tried to win customers from Oracle and Sybase with their MSSQL database and stated that it didn't use SQL but their own proprietary query language, i.e. LINQ, I doubt many would have switched. I believe that is exactly what is happening now by introducing it into the application's business layer. I have used MS for many years, but one gripe I have with them is that they change direction a lot. By a lot I mean new releases of .NET, Silverlight, etc. are more than 30% different from the previous version, so by the time you become productive a new release is on the way. As things stand now, a web developer using .NET would need to know either VB.NET or C#, XML, XAML, JavaScript, HTML, SQL and now LINQ. That doesn't make for good productivity in my books. My concern is that once we all start using LINQ, MS will start changing it between releases and it will become an ever-changing landscape. I believe that LINQ to SQL has already been deprecated. At least with SQL we are dealing with a more stable and standardized language. Are we looking at a programming revolution or a marketing campaign? As far as I know, other languages like COBOL have stayed the same for years; a COBOL programmer from 20 years ago could pick up today's code and start working on it. Could a VB3 person work on a modern .NET web app? Would these large changes be needed if the underlying original foundation had been sound? I worry about following MS's shaky roadmap with its dead ends and double-backs. Are there any architects out there who feel the same? Regards, Josef

    Read the article

  • Count number of queries executed by NHibernate in a unit test

    - by Bittercoder
    In some unit/integration tests of the code we wish to check that correct usage of the second level cache is being employed by our code. Based on the code presented by Ayende here: http://ayende.com/Blog/archive/2006/09/07/MeasuringNHibernatesQueriesPerPage.aspx I wrote a simple class for doing just that: public class QueryCounter : IDisposable { CountToContextItemsAppender _appender; public int QueryCount { get { return _appender.Count; } } public void Dispose() { var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger; logger.RemoveAppender(_appender); } public static QueryCounter Start() { var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger; lock (logger) { foreach (IAppender existingAppender in logger.Appenders) { if (existingAppender is CountToContextItemsAppender) { var countAppender = (CountToContextItemsAppender) existingAppender; countAppender.Reset(); return new QueryCounter {_appender = (CountToContextItemsAppender) existingAppender}; } } var newAppender = new CountToContextItemsAppender(); logger.AddAppender(newAppender); logger.Level = Level.Debug; logger.Additivity = false; return new QueryCounter {_appender = newAppender}; } } public class CountToContextItemsAppender : IAppender { int _count; public int Count { get { return _count; } } public void Close() { } public void DoAppend(LoggingEvent loggingEvent) { if (string.Empty.Equals(loggingEvent.MessageObject)) return; _count++; } public string Name { get; set; } public void Reset() { _count = 0; } } } With intended usage: using (var counter = QueryCounter.Start()) { // ... do something Assert.Equal(1, counter.QueryCount); // check the query count matches our expectations } But it always returns 0 for Query count. No sql statements are being logged. However if I make use of Nhibernate Profiler and invoke this in my test case: NHibernateProfiler.Intialize() Where NHProf uses a similar approach to capture logging output from NHibernate for analysis via log4net etc. then my QueryCounter starts working. It looks like I'm missing something in my code to get log4net configured correctly for logging nhibernate sql ... does anyone have any pointers on what else I need to do to get sql logging output from Nhibernate?
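    A guess at the missing piece, sketched below: if nothing configures log4net in the test process, its repository is never marked as configured and events sent to the "NHibernate.SQL" logger are dropped before any appender runs (NHProf's Initialize() happens to configure logging, which is why the counter starts working once it is called). QueryCounter already forces the logger to DEBUG, so calling a configurator once before starting the counter should be enough:

      using log4net.Config;

      public static class LoggingSetup
      {
          static bool _configured;

          // Call once per test run, before QueryCounter.Start().
          public static void EnsureConfigured()
          {
              if (_configured) return;
              BasicConfigurator.Configure();   // or XmlConfigurator.Configure() with a log4net section in app.config
              _configured = true;
          }
      }

      // usage in a fixture setup:
      // LoggingSetup.EnsureConfigured();
      // using (var counter = QueryCounter.Start()) { /* ... */ }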

    Read the article

  • Importing a large delimited file to a MySQL table

    - by Tom
    I have this large (and oddly formatted txt file) from the USDA's website. It is the NUT_DATA.txt file. But the problem is that it is almost 27mb! I was successful in importing the a few other smaller files, but my method was using file_get_contents which it makes sense why an error would be thrown if I try to snag 27+ mb of RAM. So how can I import this massive file to my MySQL DB without running into a timeout and RAM issue? I've tried just getting one line at a time from the file, but this ran into timeout issue. Using PHP 5.2.0. Here is the old script (the fields in the DB are just numbers because I could not figure out what number represented what nutrient, I found this data very poorly document. Sorry about the ugliness of the code): <? $file = "NUT_DATA.txt"; $data = split("\n", file_get_contents($file)); // split each line $link = mysql_connect("localhost", "username", "password"); mysql_select_db("database", $link); for($i = 0, $e = sizeof($data); $i < $e; $i++) { $sql = "INSERT INTO `USDA` (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17) VALUES("; $row = split("\^", trim($data[$i])); // split each line by carrot for ($j = 0, $k = sizeof($row); $j < $k; $j++) { $val = trim($row[$j], '~'); $val = (empty($val)) ? 0 : $val; $sql .= ((empty($val)) ? 0 : $val) . ','; // this gets rid of those tildas and replaces empty strings with 0s } $sql = rtrim($sql, ',') . ");"; mysql_query($sql) or die(mysql_error()); // query the db } echo "Finished inserting data into database.\n"; mysql_close($link); ?>
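    One way around both limits, sketched below (assuming the caret-separated fields line up with the table's columns, as in the original script): stream the file with fgets() so only one line is held in memory at a time, and lift the script's execution time limit for the 27 MB import:

      <?php
      set_time_limit(0);                       // the import will outlive the default 30-second limit
      $link = mysql_connect("localhost", "username", "password");
      mysql_select_db("database", $link);

      $handle = fopen("NUT_DATA.txt", "r");
      while (($line = fgets($handle)) !== false) {   // one line at a time, not the whole 27 MB
          $row = explode("^", trim($line));
          $values = array();
          foreach ($row as $field) {
              $field = trim($field, '~');
              $values[] = ($field === '') ? 0 : "'" . mysql_real_escape_string($field, $link) . "'";
          }
          mysql_query("INSERT INTO `USDA` VALUES (" . implode(',', $values) . ")", $link)
              or die(mysql_error());
      }
      fclose($handle);
      mysql_close($link);
      echo "Finished inserting data into database.\n";
      ?>

    MySQL's LOAD DATA INFILE is another option and is usually much faster, if the server configuration allows it.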

    Read the article

  • What's wrong with this PHP mysql_real_escape_string?

    - by skyhigh
    Hi Atomic Number Latin English Abbreviation * check the variables for content */ /*** a list of filters ***/ $filters = array( 'searchtext' => array( 'filter' => FILTER_CALLBACK, 'options' => 'mysql_real_escape_string'), 'fieldname' => array( 'filter' => FILTER_CALLBACK, 'options' => 'mysql_real_escape_string') ); /*** escape all POST variables ***/ $input = filter_input_array(INPUT_POST, $filters); /*** check the values are not empty ***/ if(empty($input['fieldname']) || empty($input['searchtext'])) { echo 'Invalid search'; } else { /*** mysql hostname ***/ $hostname = 'localhost'; /*** mysql username ***/ $username = 'username'; /*** mysql password ***/ $password = 'password'; /*** mysql database name ***/ $dbname = 'periodic_table'; /*** connect to the database ***/ $link = @mysql_connect($hostname, $username, $password); /*** check if the link is a valid resource ***/ if(is_resource($link)) { /*** select the database we wish to use ***/ if(mysql_select_db($dbname, $link) === TRUE) { /*** sql to SELECT information***/ $sql = sprintf("SELECT * FROM elements WHERE %s = '%s'", $input['fieldname'], $input['searchtext']); /*** echo the sql query ***/ echo '<h3>'.$sql.'</h3>'; /*** run the query ***/ $result = mysql_query($sql); /*** check if the result is a valid resource ***/ if(is_resource($result)) { /*** check if we have more than zero rows ***/ if(mysql_num_rows($result) !== 0) { echo '<table>'; while($row=mysql_fetch_array($result)) { echo '<tr> <td>'.$row['atomicnumber'].'</td> <td>'.$row['latin'].'</td> <td>'.$row['english'].'</td> <td>'.$row['abbr'].'</td> </tr>'; } echo '</table>'; } else { /*** if zero results are found.. ***/ echo 'Zero results found'; } } else { /*** if the resource is not valid ***/ 'No valid resource found'; } } /*** if we are unable to select the database show an error ****/ else { echo 'Unable to select database '.$dbname; } /*** close the connection ***/ mysql_close($link); } else { /*** if we fail to connect ***/ echo 'Unable to connect'; } } } else { echo 'Please Choose An Element'; } ? I got this code from phppro.org tutorials site and i tried to run it. It gives Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: A link to the server could not be established. .... Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: Access denied for user 'ODBC'@'localhost' (using password: NO).... I went to php.net and look it up "Note: A MySQL connection is required before using mysql_real_escape_string() otherwise an error of level E_WARNING is generated, and FALSE is returned. If link_identifier isn't defined, the last MySQL connection is used." My questions are: 1-why they put single quotation around mysql_real_escape_string ? 2-They should establish a connection first, then use the $filter array statement with mysql_real_escape_string ?
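    On question 1: the single quotes are there because FILTER_CALLBACK expects a callback, and a function name given as a string is a valid PHP callback. On question 2: yes — a reworked flow along these lines (a hedged sketch reusing the tutorial's names) opens the MySQL link first and only then escapes the POST values, so mysql_real_escape_string() has a connection to use instead of falling back to a default 'ODBC'@'localhost' one:

      <?php
      /*** connect before doing any escaping ***/
      $link = mysql_connect('localhost', 'username', 'password');
      if (!is_resource($link)) {
          die('Unable to connect');
      }
      if (mysql_select_db('periodic_table', $link) !== TRUE) {
          die('Unable to select database periodic_table');
      }

      /*** escape the POST values now that a link exists ***/
      $fieldname  = mysql_real_escape_string($_POST['fieldname'], $link);
      $searchtext = mysql_real_escape_string($_POST['searchtext'], $link);

      if ($fieldname === '' || $searchtext === '') {
          die('Invalid search');
      }

      $sql    = sprintf("SELECT * FROM elements WHERE %s = '%s'", $fieldname, $searchtext);
      $result = mysql_query($sql, $link);
      ?>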

    Read the article

  • Converting this code from ASP to PHP

    - by jethomas
    I'll admit I'm a novice programmer and really the only experience I have is in classic ASP. I'm looking for a way to convert this asp code to PHP. For a customer who only has access to a linux box but also as a learning tool for me. Thanks in advance for the help: Recordset and Function: Function pd(n, totalDigits) if totalDigits > len(n) then pd = String(totalDigits-len(n),"0") & n else pd = n end if End Function 'declare the variables Dim Connection Dim Recordset Dim SQL Dim SQLDate SQLDate = Year(Date)& "-" & pd(Month(Date()),2)& "-" & pd(Day(Date()),2) 'declare the SQL statement that will query the database SQL = "SELECT * FROM tblXYZ WHERE element_8 = 2 AND element_9 > '" & SQLDate &"'" 'create an instance of the ADO connection and recordset objects Set Connection = Server.CreateObject("ADODB.Connection") Set Recordset = Server.CreateObject("ADODB.Recordset") 'open the connection to the database Connection.Open "PROVIDER=MSDASQL;DRIVER={MySQL ODBC 5.1 Driver};SERVER=localhost;UID=xxxxx;PWD=xxxxx;database=xxxxx;Option=3;" 'Open the recordset object executing the SQL statement and return records Recordset.Open SQL,Connection Display page/loop: Dim counter counter = 0 While Not Recordset.EOF counter = counter + 1 response.write("<div><td width='200' valign='top' align='center'><a href='" & Recordset("element_6") & "' style='text-decoration: none;'><div id='ad_header'>" & Recordset("element_3") & "</div><div id='store_name' valign='bottom'>" & Recordset("element_5") & "</div><img id='photo-small-img' src='http://xyz.com/files/" & Recordset("element_7") & "' /><br /><div id='ad_details'>"& Recordset("element_4") & "</div></a></td></div>") Recordset.MoveNext If counter = 3 Then Response.Write("</tr><tr>") counter = 0 End If Wend
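    A hedged first pass at the PHP equivalent (using the old mysql_* API to match the PHP 5.x era; credentials, table and column names copied from the ASP):

      <?php
      // date() replaces the pd()/Year()/Month()/Day() padding logic: Y-m-d is already zero-padded
      $sqlDate = date('Y-m-d');

      $link = mysql_connect('localhost', 'xxxxx', 'xxxxx') or die(mysql_error());
      mysql_select_db('xxxxx', $link);

      $sql    = "SELECT * FROM tblXYZ WHERE element_8 = 2 AND element_9 > '" . $sqlDate . "'";
      $result = mysql_query($sql, $link) or die(mysql_error());

      $counter = 0;
      while ($row = mysql_fetch_assoc($result)) {
          $counter++;
          echo "<div><td width='200' valign='top' align='center'>"
             . "<a href='{$row['element_6']}' style='text-decoration: none;'>"
             . "<div id='ad_header'>{$row['element_3']}</div>"
             . "<div id='store_name' valign='bottom'>{$row['element_5']}</div>"
             . "<img id='photo-small-img' src='http://xyz.com/files/{$row['element_7']}' /><br />"
             . "<div id='ad_details'>{$row['element_4']}</div></a></td></div>";
          if ($counter == 3) {
              echo "</tr><tr>";
              $counter = 0;
          }
      }
      mysql_close($link);
      ?>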

    Read the article

  • PHP uploads and checking

    - by user147685
    Hi all, I want to update/insert file by using upload box to the database but before that it will check for the file type thats only pdf can be upload. Somthing wrong with the codes and i dont know what..Please help here are part of my updates codes: $id=$rs['id']; $qry = "SELECT a.faillampiran FROM {$CFG->prefix}ptk_lampiran a, {$CFG->prefix}ptk b WHERE a.ptkid=$id and b.id=$id"; $sql = get_records_sql($qry); if($_POST['check']){ $ext = pathinfo($faillampiran,PATHINFO_EXTENSION); $err = "Upload Only PDF File. "; if ($ext =='pdf'){ $qry="UPDATE {$CFG->prefix}ptk_lampiran SET faillampiran='".$faillampiran."' WHERE ptkid='".$_GET['id']."'"; $sql=mysql_query($qry); $qry = "SELECT a.faillampiran FROM {$CFG->prefix}ptk_lampiran a, {$CFG->prefix}ptk b WHERE b.id = '".$rs[id]."' AND a.ptkid = '".$rs[id]."' "; $sql = get_records_sql($qry); foreach($sql as $rs){ ?> <?=basename($rs->faillampiran); ?><br> <? } ?> <tr><td></td><td></td><td> <input type="file" size="50" name="faillampiran" alt="faillampiran" value= "<?=$faillampiran;?>" /> <input type="submit" name="edit" value="Muat naik fail ini" /><br /> </td></tr> } else { echo "<script>alert('$err')</script>"; } } else { foreach($sql as $rs){ ?> <?=basename($rs->faillampiran); ?><br> <? } ?> <input type="file" size="50" name="faillampiran" alt="faillampiran" value= "<?=$faillampiran;?>" /> <input type="submit" name="edit" value="Muat naik fail ini" /><br /> <?} Sori bout the messiness of the codes, im still a beginner in this languages. thx
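    A hedged sketch of just the type check, on the assumption that the upload arrives through $_FILES rather than in a plain $faillampiran variable (which is where the current check silently gets an empty value); $CFG->prefix is kept from the framework code above:

      <?php
      if (isset($_POST['check']) && isset($_FILES['faillampiran'])) {
          $name = $_FILES['faillampiran']['name'];
          $ext  = strtolower(pathinfo($name, PATHINFO_EXTENSION));

          if ($ext == 'pdf') {
              $stored = mysql_real_escape_string($name);
              $qry = "UPDATE {$CFG->prefix}ptk_lampiran SET faillampiran='" . $stored . "'"
                   . " WHERE ptkid='" . (int) $_GET['id'] . "'";
              mysql_query($qry) or die(mysql_error());
          } else {
              echo "<script>alert('Upload Only PDF File.')</script>";
          }
      }
      ?>

    If the file itself (not just its name) has to be kept, it still needs to be moved from $_FILES['faillampiran']['tmp_name'] with move_uploaded_file() before the record is updated.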

    Read the article

  • SSIS - user variable used in derived column transform is not available - in some cases

    - by soo
    Unfortunately I don't have a repro for my issue, but I thought I would try to describe it in case it sounds familiar to someone... I am using SSIS 2005, SP2. My package has a package-scope user variable - let's call it user_var first step in the control flow is an Execute SQL task which runs a stored procedure. All that SP does is insert a record in a SQL table (with an identity column) and then go back and get the max ID value. The Execute SQL task saves this output into user_var the control flow then has a Data Flow Task - it goes and gets some source data, has a derived column which sets a column called run_id to user_var - and saves the data to a SQL destination In most cases (this template is used for many packages, running every day) this all works great. All of the destination records created get set with a correct run_id. However, in some cases, there is a set of the destination data that does not get run_id equal to user_var, but instead gets a value of 0 (0 is the default value for user_var). I have 2 instances where this has happened, but I can't make it happen. In both cases, it was just less that 10,000 records that have run_id = 0. Since SSIS writes data out in 10,000 record blocks, this really makes me think that, for the first set of data written out, user_var was not yet set. Then, after that first block, for the rest of the data, run_id is set to a correct value. But control passed on to my data flow from the Execute SQL task - it would have seemed reasonable to me that it wouldn't go on until the SP has completed and user_var is set. Maybe it just runs the SP, but doesn't wait for it to complete? In both cases where this has happened there seemed to be a few packages hitting the table to get a new user_var at about the same time. And in both cases lots of data was written (40 million rows, 60 million rows) - my thinking is that that means the writes were happening for a while. Sorry to be both long-winded AND vague. A winning combination! Does this sound familiar to anyone? Thanks.

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when: The target table uses a unique filtered index; and No key column of the filtered index is updated; and A column from the filtering condition is updated; and Transient key violations are possible Example Tables Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns, an integer primary key, a single character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’: CREATE TABLE #Target ( pk integer NOT NULL, ak character(1) NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) );   CREATE UNIQUE INDEX uq1 ON #Target (ak) INCLUDE (status_code) WHERE status_code = 'a'; The changes table contains just an integer primary key (to identify the target row to change) and the new status code: CREATE TABLE #Changes ( pk integer NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) ); Sample Data The sample data for the example is: INSERT #Target (pk, ak, status_code) VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');   INSERT #Changes (pk, status_code) VALUES (1, 'd'), (4, 'a');          Target                     Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ d           ¦ +-----------------------+ The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added. Merge Test One Let’s now execute a MERGE statement to apply the changes: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; The MERGE changes the two target rows as expected.  The updated target table now contains: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦ <—changed from ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦ <—changed from ‘d’ +-----------------------+ Merge Test Two Now let’s repopulate the changes table to reverse the updates we just performed: TRUNCATE TABLE #Changes;   INSERT #Changes (pk, status_code) VALUES (1, 'a'), (4, 'd'); This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.  
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database.  Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point. Connect bug filed here. Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition © 2012 Paul White – All Rights Reserved Twitter: @SQL_Kiwi Email: [email protected]

    Read the article

  • Can't successfully run Sharepoint Foundation 2010 first time configuration

    - by Robert Koritnik
    I'm trying to run the non-GUI version of configuration wizard using power shell because I would like to set config and admin database names. GUI wizard doesn't give you all possible options for configuration. I run this command: New-SPConfigurationDatabase -DatabaseName "Sharepoint2010Config" -DatabaseServer "developer.pleiado.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force) Of course all these are in the same line. I've broken them down into separate lines to make it easier to read. When I run this command I get this error: New-SPConfigurationDatabase : Cannot connect to database master at SQL server a t developer.pleiado.pri. The database might not exist, or the current user does not have permission to connect to it. At line:1 char:28 + New-SPConfigurationDatabase <<<< -DatabaseName "Sharepoint2010Config" -Datab aseServer "developer.pleiado.pri" -AdministrationContentDatabaseName "Sharepoint 2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureS tring "%h4r3p0int" -AsPlainText -Force) + CategoryInfo : InvalidData: (Microsoft.Share...urationDatabase: SPCmdletNewSPConfigurationDatabase) [New-SPConfigurationDatabase], SPExcep tion + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPCon figurationDatabase I created two domain accounts: SPF_DATABASE - database account SPF_ADMIN - farm account I'm running powershell console as domain administrator. I've tried to run SQL Management studio as domain admin and created a dummy database and it worked wothout a problem. I'm running: Windows 7 x64 on the machine where Sharepoint Foundation 2010 should be installed and also has preinstalled SQL Server 2008 R2 Windows Server 2008 R2 Server Core is my domain controller I've installed Sharepoint according to MS guides http://msdn.microsoft.com/en-us/library/ee554869%28office.14%29.aspx installing all additional patches that are related to my configuration. Any ideas what should I do to make it work?
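    A hedged guess at the usual cause: New-SPConfigurationDatabase connects to master and creates databases, so the account that actually opens the connection (the one running the PowerShell session, or the one supplied at the credential prompt) needs the dbcreator and securityadmin server roles on the SQL Server 2008 R2 instance. Something like the following, run as a SQL administrator, grants them to the farm account (the PLEIADO\ NetBIOS prefix is an assumption for the pleiado.pri domain; the same grants may be needed for the installing account):

      USE [master];
      CREATE LOGIN [PLEIADO\SPF_ADMIN] FROM WINDOWS;
      EXEC sp_addsrvrolemember @loginame = N'PLEIADO\SPF_ADMIN', @rolename = N'dbcreator';
      EXEC sp_addsrvrolemember @loginame = N'PLEIADO\SPF_ADMIN', @rolename = N'securityadmin';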

    Read the article

  • Ubuntu: Move fsbackup backups to Amazon S3

    - by Alexander Gladysh
    I have a legacy server (Ubuntu 9.10 Karmic x86), where previous admin set up backups with fsbackup. This server lives in a VPS (under some kind of Xen), and it is low on HDD space (16 GB total). Now it came to a point, where fsbackup backups take more space than the rest of data in the system. The filesystem is 100% filled, and I already cleaned up all that I could, aside from actual backups. I do not have any experience managing fsbackup, and I do not want to break or lose the backups. Googling fsbackup gives surprisingly low quality results... Here is how my backups look like: $ sudo ls -lh /var/archives total 8.1G -rw-rw---- 1 root root 318 2011-01-06 06:26 myserver-20110106.md5 -rw-rw---- 1 root root 258 2011-01-07 06:26 myserver-20110107.md5 -rw-rw---- 1 root root 318 2011-01-08 06:26 myserver-20110108.md5 -rw-rw---- 1 root root 318 2011-01-09 06:26 myserver-20110109.md5 -rw-rw---- 1 root root 346 2011-01-10 06:43 myserver-20110110.md5 -rw-rw---- 1 root root 14M 2011-01-06 06:26 myserver-all-mysql-databases.20110106.sql.bz2 -rw-rw---- 1 root root 14M 2011-01-07 06:26 myserver-all-mysql-databases.20110107.sql.bz2 -rw-rw---- 1 root root 14M 2011-01-08 06:26 myserver-all-mysql-databases.20110108.sql.bz2 -rw-rw---- 1 root root 14M 2011-01-09 06:26 myserver-all-mysql-databases.20110109.sql.bz2 -rw-rw---- 1 root root 862 2011-01-10 06:43 myserver-all-mysql-databases.20110110.sql.bz2 -rw-rw---- 1 root root 827K 2011-01-03 06:25 myserver-etc.20110103.master.tar.gz -rw-rw---- 1 root root 16K 2011-01-06 06:25 myserver-etc.20110106.tar.gz -rw-rw---- 1 root root 16K 2011-01-07 06:25 myserver-etc.20110107.tar.gz -rw-rw---- 1 root root 16K 2011-01-08 06:25 myserver-etc.20110108.tar.gz -rw-rw---- 1 root root 16K 2011-01-09 06:25 myserver-etc.20110109.tar.gz -rw-rw---- 1 root root 827K 2011-01-10 06:25 myserver-etc.20110110.master.tar.gz -rw------- 1 root root 36K 2011-01-10 06:25 myserver-etc.incremental.bin -rw-rw---- 1 root root 29M 2011-01-03 06:25 myserver-home.20110103.master.tar.gz -rw-rw---- 1 root root 11K 2011-01-06 06:25 myserver-home.20110106.tar.gz -rw-rw---- 1 root root 14K 2011-01-07 06:25 myserver-home.20110107.tar.gz -rw-rw---- 1 root root 11K 2011-01-08 06:25 myserver-home.20110108.tar.gz -rw-rw---- 1 root root 11K 2011-01-09 06:25 myserver-home.20110109.tar.gz -rw-rw---- 1 root root 2.0M 2011-01-10 06:25 myserver-home.20110110.master.tar.gz -rw------- 1 root root 27K 2011-01-10 06:25 myserver-home.incremental.bin -rw-rw---- 1 root root 1.5G 2011-01-03 06:29 myserver-opt.20110103.master.tar.gz -rw-rw---- 1 root root 1.5M 2011-01-06 06:25 myserver-opt.20110106.tar.gz -rw-rw---- 1 root root 1.5M 2011-01-07 06:25 myserver-opt.20110107.tar.gz -rw-rw---- 1 root root 1.5M 2011-01-08 06:25 myserver-opt.20110108.tar.gz -rw-rw---- 1 root root 1.5M 2011-01-09 06:25 myserver-opt.20110109.tar.gz -rw-rw---- 1 root root 1.5G 2011-01-10 06:30 myserver-opt.20110110.master.tar.gz -rw------- 1 root root 201K 2011-01-10 06:30 myserver-opt.incremental.bin -rw-rw---- 1 root root 2.3G 2011-01-03 06:41 myserver-srv.20110103.master.tar.gz -rw-rw---- 1 root root 44M 2011-01-06 06:26 myserver-srv.20110106.tar.gz -rw-rw---- 1 root root 27M 2011-01-07 06:25 myserver-srv.20110107.tar.gz -rw-rw---- 1 root root 39M 2011-01-08 06:26 myserver-srv.20110108.tar.gz -rw-rw---- 1 root root 2.0M 2011-01-09 06:25 myserver-srv.20110109.tar.gz -rw-rw---- 1 root root 2.7G 2011-01-10 06:42 myserver-srv.20110110.master.tar.gz -rw------- 1 root root 3.4M 2011-01-10 06:42 myserver-srv.incremental.bin I'm thinking about 
moving backups to Amazon S3, but before that I have to free some space, so the server can work. Perhaps I can mount /var/archives to an Amazon S3 bucket somehow... Any advice?

    Read the article

  • Can't successfully run Sharepoint Foundation 2010 first time configuration

    - by Robert Koritnik
    I'm trying to run the non-GUI version of configuration wizard using power shell because I would like to set config and admin database names. GUI wizard doesn't give you all possible options for configuration (but even though it doesn't do it either). I run this command: New-SPConfigurationDatabase -DatabaseName "Sharepoint2010Config" -DatabaseServer "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force) Of course all these are in the same line. I've broken them down into separate lines to make it easier to read. When I run this command I get this error: New-SPConfigurationDatabase : Cannot connect to database master at SQL server a t developer.mydomain.pri. The database might not exist, or the current user does not have permission to connect to it. At line:1 char:28 + New-SPConfigurationDatabase <<<< -DatabaseName "Sharepoint2010Config" -Datab aseServer "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint 2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureS tring "%h4r3p0int" -AsPlainText -Force) + CategoryInfo : InvalidData: (Microsoft.Share...urationDatabase: SPCmdletNewSPConfigurationDatabase) [New-SPConfigurationDatabase], SPExcep tion + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPCon figurationDatabase I created two domain accounts and haven't added them to any group: SPF_DATABASE - database account SPF_ADMIN - farm account I'm running powershell console as domain administrator. I've tried to run SQL Management studio as domain admin and created a dummy database and it worked without a problem. I'm running: Windows 7 x64 on the machine where Sharepoint Foundation 2010 should be installed and also has preinstalled SQL Server 2008 R2 database Windows Server 2008 R2 Server Core is my domain controller that just serves domain features and nothing else I've installed Sharepoint according to MS guides http://msdn.microsoft.com/en-us/library/ee554869%28office.14%29.aspx installing all additional patches that are related to my configuration. Any ideas what should I do to make it work?

    Read the article

  • Is the sysadmin/netadmin the de facto project planner at your organization?

    - by gft74
    At my company it has somehow over the past few years slowly become my job to come up with a project plan, milestones and time lines for deployment of developer applications. Typical scenario: My team receives a request for a new website/db combo and date for deployment. I send back a questionnaire for the developer to fill out on all the reqs for the site (ssl? db? growth projections etc.) After I get back all the information, the head of development wants a well developed document of what servers will it live on why those servers what is the time line for creating the resources step-by-step SOP for getting the application on the server and all related resources created (dns, firewall, load balancer etc.) I maybe just whining but it feels like this is something better suited to our Project Management staff (which we have) or to the developer. I understand that I need to give them a time-line on creating the resources, but still feel like this is overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article

  • Get the Windows Scheduler's Location Or Code?

    - by Ram
    Today I got a task to find the scheduler that runs once a month; I have to change its running period to once a week. I have searched for it in the Windows scheduled tasks, but I didn't find it. It sends a mail containing a link. Now I am confused about where else I can find it, as the place I know the scheduler could be has already been checked. Can someone suggest where else I can search for this scheduler? As per the previous developer's comment, "It is a normal Windows method of scheduling tasks". UPDATE: The task runs periodically, once a month. As far as I know it could be a Windows scheduled task or a Windows service created by the old developer. As the previous developer is not available and I do not have any documentation, I need to change the time period from monthly to weekly. I have checked the scheduled tasks on the server and the services running on the server and was unable to find the scheduler. Now I have two questions: is there any other approach I can use to schedule an automated email periodically, and any idea how to find this one?

    Read the article

  • Windows Azure Learning Plan - Security

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with Security for  Windows Azure.   General Security Information Overview and general  information about Windows Azure Security - what it is, how it works, and where you can learn more. General Security Whitepaper – answers most questions http://blogs.msdn.com/b/usisvde/archive/2010/08/10/security-white-paper-on-windows-azure-answers-many-faq.aspx Windows Azure Security Notes from the Patterns and Practices site http://blogs.msdn.com/b/jmeier/archive/2010/08/03/now-available-azure-security-notes-pdf.aspx Overview of Azure Security http://www.windowsecurity.com/articles/Microsoft-Azure-Security-Cloud.html Azure Security Resources http://reddevnews.com/articles/2010/08/19/microsoft-releases-windows-azure-security-resources.aspx Cloud Computing Security Considerations http://www.microsoft.com/downloads/en/details.aspx?FamilyID=68fedf9c-1c27-4642-aa5b-0a34472303ea&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MicrosoftDownloadCenter+%28Microsoft+Download+Center Security in Cloud Computing – a Microsoft Perspective http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7c8507e8-50ca-4693-aa5a-34b7c24f4579&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MicrosoftDownloadCenter+%28Microsoft+Download+Center Physical Security for Microsoft’s Online Computing Information on the Infrastructure and Locations for Azure Physical Security. The Global Foundation Services Group at Microsoft handles physical security http://www.globalfoundationservices.com/security/index.html Microsoft’s Security Response Center http://www.microsoft.com/security/msrc/ Software Security for Microsoft’s Online Computing Steps we take as a company to develop secure software Windows Azure is developed using the Trustworthy Computing Initiative http://www.microsoft.com/about/twc/en/us/default.aspx and  http://msdn.microsoft.com/en-us/library/ms995349.aspx Identity and Access in the Cloud http://blogs.msdn.com/b/technology_titbits_by_rajesh_makhija/archive/2010/10/29/identity-and-access-in-the-cloud.aspx Security Steps you should take While Microsoft takes great pains to secure the infrastructure, platform and code for Windows Azure, you have a responsibility to write secure code. These pointers can help you do that. Securing your cloud architecture, step-by-step http://technet.microsoft.com/en-us/magazine/gg296364.aspx Security Guidelines for Windows Azure http://redmondmag.com/articles/2010/06/15/microsoft-issues-security-guidelines-for-windows-azure.aspx  Best Practices for Windows Azure Security http://blogs.msdn.com/b/vbertocci/archive/2010/06/14/security-best-practices-for-developing-windows-azure-applications.aspx Active Directory and Windows Azure http://blogs.msdn.com/b/plankytronixx/archive/2010/10/22/projecting-your-active-directory-identity-to-the-azure-cloud.aspx Understanding Encryption (great overview and tutorial) http://blogs.msdn.com/b/plankytronixx/archive/2010/10/23/crypto-primer-understanding-encryption-public-private-key-signatures-and-certificates.aspx Securing your Connection Strings (SQL Azure) http://blogs.msdn.com/b/sqlazure/archive/2010/09/07/10058942.aspx Getting started with Windows Identity Foundation (WIF) quickly http://blogs.msdn.com/b/alikl/archive/2010/10/26/windows-identity-foundation-wif-fast-track.aspx

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

    This is the final blog post from the pre-release series. In it we are going to share with you some of the updates coming to our reporting solution in Q1 2010. A new Declarative Data Source Engine will be added to Telerik Reporting that will allow full control over data management and deliver significant gains in rendering performance and memory consumption. Some of the engine's new features will be:
    - Data source parameters - used to limit the data retrieved from the data source to just the data the report needs. Data source parameters are processed on the data source side, so only the queried data is fetched to the reporting engine, rather than the full data source. This lowers memory consumption, because data operations are performed on the queried data only and only that data has to be held in memory, instead of the whole dataset, which was the case with the old approach.
    - Support for stored procedures - these help achieve a consistent implementation of logic across applications and are especially practical for repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed from different reports and/or applications. Stored procedures will not only save development time, they will also improve performance, because each stored procedure is compiled on the database server once and then reused. In Telerik Reporting, stored procedures can also be parameterized, with elements of the SQL statement bound to parameters. These parameterized SQL queries are handled through the data source parameters and evaluated at run time. Using parameterized SQL queries improves performance and decreases the memory footprint of your application, because they are applied directly on the database server and only the necessary data is downloaded to the middle tier or client machine.
    - Calculated fields through expressions - with the help of the new reporting engine you will be able to use field values in formulas to produce a calculated field. A calculated field is a user-defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports.
    - Improved performance and an optimized in-memory OLAP engine - the new data source comes with several improvements in how aggregates are calculated and how memory is managed. As a result, you may see between a 30% (for simpler reports) and 400% (for calculation-intensive reports) improvement in rendering performance, and about a 50% decrease in memory consumption.
    - Full design-time support through wizards - declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reporting's essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available at design time in VS2005/VS2008/VS2010.
    More features will be revealed on the product's "what's new" page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th, which will be dedicated to the updates in Telerik Reporting Q1 2010.
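    The data source parameter idea is easiest to see with plain ADO.NET rather than the Telerik engine itself (whose API is not shown in this post): the parameter travels to the database server with the query, so only the filtered rows ever reach the reporting tier. The connection string, table, and columns below are placeholders.

        // Parameterized query sketch: SQL Server applies the WHERE clause,
        // so only the rows the report actually needs are fetched into memory.
        using System.Data;
        using System.Data.SqlClient;

        class ParameterizedQueryDemo
        {
            static DataTable LoadOrders(string connectionString, int year)
            {
                var table = new DataTable();
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT OrderID, CustomerID, Total FROM dbo.Orders " +
                    "WHERE YEAR(OrderDate) = @Year", connection))
                {
                    command.Parameters.Add("@Year", SqlDbType.Int).Value = year;
                    using (var adapter = new SqlDataAdapter(command))
                    {
                        adapter.Fill(table); // only the filtered rows cross the wire
                    }
                }
                return table;
            }
        }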

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx
    Description: Many organizations absorb, take over, or merge with other organizations. In these cases, one of the most difficult parts of the process is merging or changing the IT systems that the employees use to do their work, process payments, and even get paid. Normally the two companies have disparate systems, and several approaches can be taken to share technology between them. An organization may choose to retain both systems and manage them separately; the advantages here are speed and keeping the profit/loss sheets separate. Another choice is to slowly "sunset" (stop using) one organization's system, cutting over to the other either immediately or at a later date. Although popular, one of the most difficult methods is to extract data and processes from one system and import them into the other: employees on the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet.
    Implementation: A distributed computing paradigm can be a good strategic fit for most of these strategies. Retaining both systems is made simpler by giving the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application; no connection other than the Internet, not even a VPN, needs to be set up. Having the users stop using one system and start with the other is just as simple in Windows Azure, for the same reason. Extracting data to Azure holds the same limitations as an on-premise system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment you pay for data transfer, so a mixed migration strategy is not recommended; however, if the data is migrated gradually over time with a defined cutover, this can be an effective strategy. Done properly, an integration strategy works very well in a distributed computing environment like Windows Azure: if the Azure code is architected as a series of services, endpoints can expose the services into and out of not only the Azure platform but internally as well. This is a form of the Hybrid Application use case documented here.
    References: Designing for Cloud Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx  5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
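    A minimal sketch of the integration idea, assuming a WCF-style service (the contract and operation are hypothetical, not from the post): one service implementation, exposed through an internet-facing endpoint, that users from either organization can reach with nothing more than Internet access and application-level credentials.

        using System.ServiceModel;

        [ServiceContract]
        public interface IPaymentService
        {
            [OperationContract]
            string SubmitPayment(string accountId, decimal amount);
        }

        public class PaymentService : IPaymentService
        {
            public string SubmitPayment(string accountId, decimal amount)
            {
                // Both organizations call this single endpoint - no VPN,
                // just the Internet and application-level security accounts.
                return "confirmation-id";
            }
        }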

    Read the article

  • How many people will be with you during 24HOP?

    - by Rob Farley
    In less than a week, SQLPASS hosts another 24 Hours of PASS event, this time with an array of 24 female speakers (in honour of this month being Women’s History Month). Interestingly, the committee has had a few people ask if there are rules about how the event can be viewed, such as “How many people from any one organisation can watch it?” or “Does it matter if a few people are crowded around the same screen?” From a licensing and marketing perspective, there is value in knowing how many people are watching the event, but there are no restrictions on how it is viewed.
    In fact, if you’re planning to watch any of these events, I want to suggest an idea: book a meeting room in your office with a projector, and watch 24HOP in there. If you’re planning to have it streaming in the background while you work, obviously this makes life a bit trickier. But if you’re planning to treat it as a training event (a two-day conference, if you like) and block out a bit of time for it (as well you should – there’s going to be some great stuff in there), then why not do it in a way that lets other people see that you’re watching it, and potentially join you?
    When an event like this runs, we can see how many different ‘people’ are attending each LiveMeeting session. What we can’t tell is how many actual people are represented. Jessica Moss spoke to the Adelaide SQL Server User Group a few weeks ago via LiveMeeting, and LiveMeeting told us there were fewer than a dozen people attending. Really there were at least three times that number, because all the people in the room with me weren’t counted. I’d love to imagine that every LiveMeeting attendee represented a crowd in a room, watching a shared screen.
    So there’s my challenge – don’t let your LiveMeeting session represent just you. Find a way of involving other people. At the very least, you’ll be able to discuss it with them afterwards. Now stick a comment on this post to let me know how many people are going to be joining you. :) If you’re not registered for the event yet, get yourself over to the SQLPASS site and make it happen.

    Read the article

  • #MIX Day 2 Keynote: Put the Phone Down and Listen

    - by andrewbrust
    MIX day 1’s keynote was all about Windows Phone 7 (WP7). MIX day 2’s was a reminder that Microsoft has much more going on than a new mobile platform. Steven Sinofsky, Scott Guthrie, Doug Purdy and others showed us lots of other good things coming from Microsoft, mostly in the developer stack, that we certainly shouldn’t overlook. These included the forthcoming IE9, its new JavaScript compiling engine, and support for HTML 5 that takes full advantage of local PC resources, including the Graphics Processing Unit. The announcements also included important additions to ASP.NET (and one subtraction, in the form of lighter-weight ViewState technology), including almost-obsessive jQuery support. That support is so good that John Resig, creator of the jQuery project, came on stage to tell us so. Then Scott Guthrie told us that Microsoft would be contributing code to the Open Source jQuery project. This is not your father’s Microsoft, it would seem.
    But to me, the crown jewels in today’s keynote were the numerous announcements around the Open Data Protocol (OData). OData is nothing more than the protocol side of “Astoria” (now known as WCF Data Services, and until recently called ADO.NET Data Services), separated out and opened up as a platform-neutral standard. The 2009 Professional Developers Conference (PDC) was Microsoft’s vehicle for first announcing OData, as well as project “Dallas,” an Azure-based cloud platform for publishing commercial OData feeds. And we had already known about “bridges” for Astoria (and thus OData) for PHP and Java. We also knew that PowerPivot, Microsoft’s forthcoming self-service BI plug-in for Excel 2010, will consume OData feeds and then facilitate drill-down analysis of their data. And we recently found out that SQL Reporting Services reports (in the forthcoming SQL Server 2008 R2) and SharePoint 2010 lists will be consumable in OData format as well.
    So what was left to announce? How about OData clients for Palm webOS and Apple iPhone/Objective-C? How about the release to Open Source of .NET’s OData client? Or the ability to publish any SQL Azure database as an OData service by simply checking a checkbox at deployment? Maybe even a Silverlight tool (code-named “Houston”) to create SQL Azure databases (and then publish them as OData) right in the browser? And what if you could get at Netflix’s entire catalog in OData format? You can – just go to http://odata.netflix.com/Catalog/ and see for yourself. Douglas Purdy, who made these announcements, said “we want OData to work on as many devices and platforms as possible.” After all the cross-platform OData announcements made in about a half year’s time, it’s hard to dispute this.
    When Microsoft plays the data card, and plays it well, watch out, because data programmability is the company’s heritage. I’ll be discussing OData at length in my April Redmond Review column. I wrote that column two weeks ago, and was convinced then that OData was a big deal; today upped the ante even more. And following the Windows Phone 7 euphoria of yesterday was, I think, smart timing. The phone, if it’s successful, will succeed because it’s a good developer platform play. And developer platforms (as well as their creators) are most successful when they have a good data strategy. OData is very Silverlight-friendly, and that means it’s WP7-friendly too. Phone plus service-oriented data is a one-two punch. A phone platform without data would have been a phone with no signal.
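    Because OData is just HTTP plus Atom (or JSON), any HTTP client can consume a feed. Here is a minimal sketch against the Netflix catalog URL from the post; $top is a standard OData query option, while the Titles entity set (and the feed still being live) are assumptions.

        using System;
        using System.Net;
        using System.Xml.Linq;

        class ODataDemo
        {
            static void Main()
            {
                // Ask the catalog for its first five titles; the response is Atom XML.
                var url = "http://odata.netflix.com/Catalog/Titles?$top=5";
                using (var client = new WebClient())
                {
                    var feed = XDocument.Parse(client.DownloadString(url));
                    XNamespace atom = "http://www.w3.org/2005/Atom";
                    foreach (var entry in feed.Descendants(atom + "entry"))
                        Console.WriteLine(entry.Element(atom + "title").Value);
                }
            }
        }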

    Read the article

< Previous Page | 835 836 837 838 839 840 841 842 843 844 845 846  | Next Page >