Search Results

Search found 17870 results on 715 pages for 'screen resolution'.


  • I can't get SetSystemTime to work in Windows Vista using C# with Interop (P/Invoke).

    - by Andrew
    Hi, I'm having a hard time getting SetSystemTime to work in my C# code. SetSystemTime is a kernel32.dll function, and I'm calling it via P/Invoke (interop). SetSystemTime returns false and the error is "Invalid Parameter". I've posted the code below. I stress that GetSystemTime works just fine. I've tested this on Vista and Windows 7. Based on some newsgroup postings I've seen, I have turned off UAC; no difference. I have done some searching for this problem and found this link: http://groups.google.com.tw/group/microsoft.public.dotnet.framework.interop/browse_thread/thread/805fa8603b00c267 where the problem is reported but no resolution seems to be found. Notice that UAC is also mentioned, but I'm not sure this is the problem. Also notice that this gentleman gets no actual Win32Error. Can someone try my code on XP? Can someone tell me what I'm doing wrong and how to fix it? If the answer is to somehow change permission settings programmatically, I'd need an example, though I would have thought turning off UAC covers that. I'm not required to use this particular approach (SetSystemTime); I'm just trying to introduce some "clock drift" to stress test something. If there's another way to do it, please tell me. Frankly, I'm surprised I need to use interop to change the system time; I would have thought there is a .NET method. Thank you very much for any help or ideas. Andrew

    Code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.InteropServices;

        namespace SystemTimeInteropTest
        {
            class Program
            {
                #region ClockDriftSetup
                [StructLayout(LayoutKind.Sequential)]
                public struct SystemTime
                {
                    [MarshalAs(UnmanagedType.U2)] public short Year;
                    [MarshalAs(UnmanagedType.U2)] public short Month;
                    [MarshalAs(UnmanagedType.U2)] public short DayOfWeek;
                    [MarshalAs(UnmanagedType.U2)] public short Day;
                    [MarshalAs(UnmanagedType.U2)] public short Hour;
                    [MarshalAs(UnmanagedType.U2)] public short Minute;
                    [MarshalAs(UnmanagedType.U2)] public short Second;
                    [MarshalAs(UnmanagedType.U2)] public short Milliseconds;
                }

                [DllImport("kernel32.dll")]
                public static extern void GetLocalTime(out SystemTime systemTime);

                [DllImport("kernel32.dll")]
                public static extern void GetSystemTime(out SystemTime systemTime);

                [DllImport("kernel32.dll", SetLastError = true)]
                public static extern bool SetSystemTime(ref SystemTime systemTime);

                //[DllImport("kernel32.dll", SetLastError = true)]
                //public static extern bool SetLocalTime(ref SystemTime systemTime);

                [DllImport("kernel32.dll", EntryPoint = "SetLocalTime")]
                [return: MarshalAs(UnmanagedType.Bool)]
                public static extern bool SetLocalTime([In] ref SystemTime lpSystemTime);
                #endregion ClockDriftSetup

                static void Main(string[] args)
                {
                    try
                    {
                        SystemTime sysTime;
                        GetSystemTime(out sysTime);
                        sysTime.Milliseconds += (short)80;
                        sysTime.Second += (short)3000;
                        bool bResult = SetSystemTime(ref sysTime);
                        if (bResult == false)
                            throw new System.ComponentModel.Win32Exception();
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Drift Error: " + ex.Message);
                    }
                }
            }
        }
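
    A likely culprit, for what it's worth: SetSystemTime validates every SYSTEMTIME field against its legal range, and the drift code above pushes Second to values like 3000 (legal range 0-59), which is exactly the kind of input that fails with "Invalid Parameter"; the call also expects UTC. A minimal sketch of a workaround, assuming the SystemTime struct and P/Invoke declarations from the question (the ToSystemTime helper is hypothetical, and the process still needs administrative rights to set the clock):

        // Do the drift arithmetic on a DateTime so every field stays in range,
        // then convert to the SYSTEMTIME layout and set it.
        static SystemTime ToSystemTime(DateTime utc)
        {
            return new SystemTime
            {
                Year = (short)utc.Year,
                Month = (short)utc.Month,
                DayOfWeek = (short)utc.DayOfWeek,
                Day = (short)utc.Day,
                Hour = (short)utc.Hour,
                Minute = (short)utc.Minute,
                Second = (short)utc.Second,
                Milliseconds = (short)utc.Millisecond
            };
        }

        static void Drift()
        {
            // SetSystemTime works in UTC; out-of-range fields such as
            // Second = 3060 fail with ERROR_INVALID_PARAMETER.
            DateTime drifted = DateTime.UtcNow.AddSeconds(3000).AddMilliseconds(80);
            SystemTime st = ToSystemTime(drifted);
            if (!SetSystemTime(ref st))
                throw new System.ComponentModel.Win32Exception();
        }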


  • How to save a large nhibernate collection without causing OutOfMemoryException

    - by Michael Hedgpeth
    How do I save a large collection with NHibernate whose elements surpass the amount of memory allowed for the process? I am trying to save a Video object with NHibernate which has a large number of Screenshots (see below for code). Each Screenshot contains a byte[], so after NHibernate tries to save 10,000 or so records at once, an OutOfMemoryException is thrown. Normally I would try to break up the save and flush the session after every 500 or so records, but in this case I need to save the collection, because it automatically saves the SortOrder and VideoId for me (without the Screenshot having to know that it is part of a Video). What is the best approach given my situation? Is there a way to break up this save without forcing the Screenshot to have knowledge of its parent Video? For your reference, here is the code from the simple sample I created:

        public class Video
        {
            public long Id { get; set; }
            public string Name { get; set; }

            public Video()
            {
                Screenshots = new ArrayList();
            }

            public IList Screenshots { get; set; }
        }

        public class Screenshot
        {
            public long Id { get; set; }
            public byte[] Data { get; set; }
        }

    And mappings:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="SavingScreenshotsTrial"
                           namespace="SavingScreenshotsTrial"
                           default-access="property">
          <class name="Screenshot" lazy="false">
            <id name="Id" type="Int64">
              <generator class="hilo"/>
            </id>
            <property name="Data" column="Data" type="BinaryBlob" length="2147483647" not-null="true" />
          </class>
        </hibernate-mapping>

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="SavingScreenshotsTrial"
                           namespace="SavingScreenshotsTrial">
          <class name="Video" lazy="false" table="Video" discriminator-value="0" abstract="true">
            <id name="Id" type="Int64" access="property">
              <generator class="hilo"/>
            </id>
            <property name="Name" />
            <list name="Screenshots" cascade="all-delete-orphan" lazy="false">
              <key column="VideoId" />
              <index column="SortOrder" />
              <one-to-many class="Screenshot" />
            </list>
          </class>
        </hibernate-mapping>

    When I try to save a Video with 10,000 screenshots, it throws an OutOfMemoryException. Here is the code I'm using:

        using (var session = CreateSession())
        {
            Video video = new Video();
            for (int i = 0; i < 10000; i++)
            {
                video.Screenshots.Add(new Screenshot() { Data = camera.TakeScreenshot(resolution) });
            }
            session.SaveOrUpdate(video);
        }
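
    One escape hatch, sketched below, is to accept the trade-off the question hopes to avoid: map VideoId and SortOrder on Screenshot directly (both properties are hypothetical here, not in the sample), save the parent first, and then flush and clear in batches so the flushed byte[] payloads become collectable. This is a sketch of the batching pattern under that assumption, not a drop-in answer:

        using (var session = CreateSession())
        using (var tx = session.BeginTransaction())
        {
            session.Save(video);                     // parent row only
            for (int i = 0; i < 10000; i++)
            {
                session.Save(new Screenshot
                {
                    Data = camera.TakeScreenshot(resolution),
                    VideoId = video.Id,              // assumed mapped on Screenshot
                    SortOrder = i                    // assumed mapped on Screenshot
                });
                if (i % 500 == 0)
                {
                    session.Flush();   // push the batch to the database
                    session.Clear();   // detach flushed entities and their byte[] data
                }
            }
            tx.Commit();
        }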


  • LINQ Generic Query with inherited base class?

    - by sah302
    I am trying to write some generic LINQ queries for my entities, but am having issues with the more complex things. Right now I am using an EntityDao class that holds all my generics, and each of my object class DAOs (such as AccomplishmentDao) inherits it. An example:

        using LCFVB.ObjectsNS;
        using LCFVB.EntityNS;

        namespace AccomplishmentNS
        {
            public class AccomplishmentDao : EntityDao<Accomplishment> { }
        }

    Now my EntityDao has the following code:

        using LCFVB.ObjectsNS;
        using LCFVB.LinqDataContextNS;

        namespace EntityNS
        {
            public abstract class EntityDao<ImplementationType> where ImplementationType : Entity
            {
                public ImplementationType getOneByValueOfProperty(string getProperty, object getValue)
                {
                    ImplementationType entity = null;
                    if (getProperty != null && getValue != null)
                    {
                        //NHibernate example:
                        //ImplementationType entity = default(ImplementationType);
                        //entity = Me.session.CreateCriteria(Of ImplementationType)().Add(Expression.Eq(getProperty, getValue)).UniqueResult(Of InterfaceType)()

                        LCFDataContext lcfdatacontext = new LCFDataContext();
                        //Generic LINQ query here
                        lcfdatacontext.GetTable<ImplementationType>();
                        lcfdatacontext.SubmitChanges();
                        lcfdatacontext.Dispose();
                    }
                    return entity;
                }

                public bool insertRow(ImplementationType entity)
                {
                    if (entity != null)
                    {
                        //NHibernate example:
                        //Me.session.Save(entity, entity.Id)
                        //Me.session.Flush()

                        LCFDataContext lcfdatacontext = new LCFDataContext();
                        //Generic LINQ query here
                        lcfdatacontext.GetTable<ImplementationType>().InsertOnSubmit(entity);
                        lcfdatacontext.SubmitChanges();
                        lcfdatacontext.Dispose();
                        return true;
                    }
                    else
                    {
                        return false;
                    }
                }
            }
        }

    I have gotten the insertRow function working, but I am not even sure how to go about doing getOneByValueOfProperty. The closest thing I could find on this site was: http://stackoverflow.com/questions/2157560/generic-linq-to-sql-query. How can I pass in the column name and the value I am checking against generically using my current set-up? It seems from that link that it's impossible using a where predicate, because the entity class doesn't know what any of the properties are until I pass them in. Lastly, I need some way of creating a new object with the return type set to the implementation type. In NHibernate (which I am trying to convert from), it was simply this line that did it:

        ImplementationType entity = default(ImplementationType);

    However, that was written against NHibernate; how would I do the equivalent for LINQ?

    EDIT: getOne doesn't seem to work even when just going off the base class (this is a partial class of the auto-generated LINQ classes). I even removed the generics. I tried:

        namespace ObjectsNS
        {
            public partial class Accomplishment
            {
                public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause)
                {
                    Accomplishment entity = new Accomplishment();
                    if (singleOrDefaultClause != null)
                    {
                        LCFDataContext lcfdatacontext = new LCFDataContext();
                        //Generic LINQ query here
                        entity = lcfdatacontext.Accomplishments.SingleOrDefault(singleOrDefaultClause);
                        lcfdatacontext.Dispose();
                    }
                    return entity;
                }
            }
        }

    I get the following error:

        Error 1: Overload resolution failed because no accessible 'SingleOrDefault' can be called with these arguments:
            Extension method 'Public Function SingleOrDefault(predicate As System.Linq.Expressions.Expression(Of System.Func(Of Accomplishment, Boolean))) As Accomplishment' defined in 'System.Linq.Queryable':
                Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))'.
            Extension method 'Public Function SingleOrDefault(predicate As System.Func(Of Accomplishment, Boolean)) As Accomplishment' defined in 'System.Linq.Enumerable':
                Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean)'.

    Okay, no problem. I changed:

        public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause)

    to:

        public Accomplishment getOneByWhereClause(Expression<Func<Accomplishment, bool>> singleOrDefaultClause)

    The error goes away. Alright, but now when I try to call the method via:

        Accomplishment accomplishment = new Accomplishment();
        var result = accomplishment.getOneByWhereClause(x => x.Id = 4)

    it doesn't work: it says x is not declared. I also tried getOne, and various other Expression =(
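
    For the generic lookup, one workable sketch is to build the predicate as an expression tree from the property name, which keeps the DAO generic and still lets LINQ to SQL translate the query (requires System.Linq and System.Linq.Expressions; names follow the question's classes). Note also that the last snippet's lambda uses = (assignment) where the comparison needs ==, i.e. x => x.Id == 4:

        public ImplementationType getOneByValueOfProperty(string getProperty, object getValue)
        {
            if (getProperty == null || getValue == null)
                return null;

            // Build: e => e.<getProperty> == getValue
            var e = Expression.Parameter(typeof(ImplementationType), "e");
            var prop = Expression.Property(e, getProperty);
            var predicate = Expression.Lambda<Func<ImplementationType, bool>>(
                Expression.Equal(prop, Expression.Constant(getValue, prop.Type)), e);

            using (var lcfdatacontext = new LCFDataContext())
            {
                return lcfdatacontext.GetTable<ImplementationType>().SingleOrDefault(predicate);
            }
        }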


  • Need help to debug application using .Net and MySQL

    - by Tim Murphy
    What advice can you give me on how to track down a bug I have while inserting data into a MySQL database with .NET? The error message is:

        MySql.Data.MySqlClient.MySqlException: Duplicate entry '26012' for key 'StockNumber_Number_UNIQUE'

    Reviewing the log proves that a StockNumber_Number of 26012 has not been inserted yet. Products in use: Visual Studio 2008; mysql.data.dll 6.0.4.0; Windows 7 Ultimate 64-bit and Windows 2003 32-bit; a custom-built ORM framework (I have the source code); importing data from an Access 2003 database. The code works fine for 3000 - 5000 imports, and the record that causes the problem in a full run imports fine by itself. I've also seen the error on other records if I sort the data to be imported a different way. I have tried the import with and without transactions, and I have logged the hell out of the system. The SQL command to create the table:

        CREATE TABLE `RareItems_RareItems` (
            `RareItemKey` CHAR(36) NOT NULL PRIMARY KEY,
            `StockNumber_Text` VARCHAR(7) NOT NULL,
            `StockNumber_Number` INT NOT NULL AUTO_INCREMENT,
            UNIQUE INDEX `StockNumber_Number_UNIQUE` (`StockNumber_Number` ASC),
            `OurPercentage` NUMERIC,
            `SellPrice` NUMERIC(19, 2),
            `Author` VARCHAR(250),
            `CatchWord` VARCHAR(250),
            `Title` TEXT,
            `Publisher` VARCHAR(250),
            `InternalNote` VARCHAR(250),
            `DateOfPublishing` VARCHAR(250),
            `ExternalNote` LONGTEXT,
            `Description` LONGTEXT,
            `Scrap` LONGTEXT,
            `SuppressionKey` CHAR(36) NOT NULL,
            `TypeKey` CHAR(36) NOT NULL,
            `CatalogueStatusKey` CHAR(36) NOT NULL,
            `CatalogueRevisedDate` DATETIME,
            `CatalogueRevisedByKey` CHAR(36) NOT NULL,
            `CatalogueToBeRevisedByKey` CHAR(36) NOT NULL,
            `DontInsure` BIT NOT NULL,
            `ExtraCosts` NUMERIC(19, 2),
            `IsWebReady` BIT NOT NULL,
            `LocationKey` CHAR(36) NOT NULL,
            `LanguageKey` CHAR(36) NOT NULL,
            `CatalogueDescription` VARCHAR(250),
            `PlacePublished` VARCHAR(250),
            `ToDo` LONGTEXT,
            `Headline` VARCHAR(250),
            `DepartmentKey` CHAR(36) NOT NULL,
            `Temp1` INT,
            `Temp2` INT,
            `Temp3` VARCHAR(250),
            `Temp4` VARCHAR(250),
            `InternetStatusKey` CHAR(36) NOT NULL,
            `InternetStatusInfo` LONGTEXT,
            `PurchaseKey` CHAR(36) NOT NULL,
            `ConsignmentKey` CHAR(36),
            `IsSold` BIT NOT NULL,
            `RowCreated` DATETIME NOT NULL,
            `RowModified` DATETIME NOT NULL
        );

    The SQL command and parameters to insert the record:

        INSERT INTO `RareItems_RareItems`
            (`RareItemKey`, `StockNumber_Text`, `StockNumber_Number`, `OurPercentage`, `SellPrice`,
             `Author`, `CatchWord`, `Title`, `Publisher`, `InternalNote`, `DateOfPublishing`,
             `ExternalNote`, `Description`, `Scrap`, `SuppressionKey`, `TypeKey`, `CatalogueStatusKey`,
             `CatalogueRevisedDate`, `CatalogueRevisedByKey`, `CatalogueToBeRevisedByKey`, `DontInsure`,
             `ExtraCosts`, `IsWebReady`, `LocationKey`, `LanguageKey`, `CatalogueDescription`,
             `PlacePublished`, `ToDo`, `Headline`, `DepartmentKey`, `Temp1`, `Temp2`, `Temp3`, `Temp4`,
             `InternetStatusKey`, `InternetStatusInfo`, `PurchaseKey`, `ConsignmentKey`, `IsSold`,
             `RowCreated`, `RowModified`)
        VALUES
            (@RareItemKey, @StockNumber_Text, @StockNumber_Number, @OurPercentage, @SellPrice,
             @Author, @CatchWord, @Title, @Publisher, @InternalNote, @DateOfPublishing,
             @ExternalNote, @Description, @Scrap, @SuppressionKey, @TypeKey, @CatalogueStatusKey,
             @CatalogueRevisedDate, @CatalogueRevisedByKey, @CatalogueToBeRevisedByKey, @DontInsure,
             @ExtraCosts, @IsWebReady, @LocationKey, @LanguageKey, @CatalogueDescription,
             @PlacePublished, @ToDo, @Headline, @DepartmentKey, @Temp1, @Temp2, @Temp3, @Temp4,
             @InternetStatusKey, @InternetStatusInfo, @PurchaseKey, @ConsignmentKey, @IsSold,
             @RowCreated, @RowModified)

    Parameter values:

        @RareItemKey = 0b625bd6-776d-43d6-9405-e97159d172a6
        @StockNumber_Text = 199305
        @StockNumber_Number = 26012
        @OurPercentage = 22.5
        @SellPrice = 1250
        @Author = SPARRMAN, Anders.
        @CatchWord = COOK: SECOND VOYAGE
        @Title = A Voyage Round the World with Captain James Cook in H.M.S. Resolution… Introduction and notes by Owen Rutter, wood engravings by Peter Barker-Mill.
        @Publisher =
        @InternalNote =
        @DateOfPublishing = 1944
        @ExternalNote = The first English translation of Sparrman’s narrative, which had originally been published in Sweden in 1802-1818, and the only complete version of his account to appear in English. The eighteenth-century translation had appeared some time before the Swedish publication of the final sections of his account. Sparrman’s observant and well-written narrative of the second voyage contains much that appears nowhere else, emphasising naturally his interests in medicine, health, and natural history.&lt;br&gt;&lt;br&gt;One of 350 numbered copies: a handsomely produced and beautifully illustrated work.
        @Description = Small folio, wood-engravings in the text; original olive glazed cloth, top edges gilt, a very good copy. London, Golden Cockerel Press, 1944.
        @Scrap =
        @SuppressionKey = 00000000-0000-0000-0000-000000000000
        @TypeKey = 93f58155-7471-46ad-84c5-262ab9dd37e8
        @CatalogueStatusKey = 00000000-0000-0000-0000-000000000003
        @CatalogueRevisedDate =
        @CatalogueRevisedByKey = c4f6fc06-956d-44c4-b393-0d5462cbffec
        @CatalogueToBeRevisedByKey = 00000000-0000-0000-0000-000000000000
        @DontInsure = False
        @ExtraCosts =
        @IsWebReady = False
        @LocationKey = 00000000-0000-0000-0000-000000000000
        @LanguageKey = 00000000-0000-0000-0000-000000000000
        @CatalogueDescription =
        @PlacePublished = Golden Cockerel Press
        @ToDo =
        @Headline =
        @DepartmentKey = 529578a3-9189-40de-b656-eef9039d00b8
        @Temp1 =
        @Temp2 =
        @Temp3 =
        @Temp4 = v
        @InternetStatusKey = 00000000-0000-0000-0000-000000000000
        @InternetStatusInfo =
        @PurchaseKey = 00000000-0000-0000-0000-000000000000
        @ConsignmentKey =
        @IsSold = True
        @RowCreated = 8/04/2010 8:49:16 PM
        @RowModified = 8/04/2010 8:49:16 PM

    Suggestions on what is causing the error and/or how to track down what is causing the problem?
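
    One thing worth checking, since StockNumber_Number is both explicitly supplied and AUTO_INCREMENT: mixing explicit values with an auto-generated counter is a classic source of sporadic duplicate-key errors that depend on insert order. A couple of hedged diagnostic queries (table and column names taken from the question, the diagnosis itself is speculative):

        -- Is 26012 really absent, or was it inserted earlier in the run by the counter?
        SELECT RareItemKey, StockNumber_Text, StockNumber_Number, RowCreated
        FROM RareItems_RareItems
        WHERE StockNumber_Number = 26012;

        -- Compare the table's next auto-increment value with the data being imported.
        SHOW TABLE STATUS LIKE 'RareItems_RareItems';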


  • Implementing ZBar QR Code Reader in UIView

    - by Bobster4300
    I really need help here. I'm pretty new to iOS/Objective-C, so sorry if the problem resolution is obvious or if my code is terrible; be easy on me! :-) I'm struggling to integrate the ZBarSDK for reading QR codes into an iPad app I'm building. If I use ZBarReaderController (of which there are plenty of tutorials and implementation guides), it works fine. However, I want to make the camera come up in a UIView as opposed to the fullscreen camera. I have gotten as far as making the camera view (readerView) come up in the UIView (ZBarReaderView) as expected, but I get an error when it scans a code. The error does not come up until a code is scanned, making me believe this is either delegate related or something else. Here are the important parts of my code (ZBarSDK.h is imported in the PCH file):

    SignInViewController.h:

        #import <UIKit/UIKit.h>
        #import <AVFoundation/AVFoundation.h>

        @class AVCaptureSession, AVCaptureDevice;

        @interface SignInViewController : UIViewController < ZBarReaderDelegate >
        {
            ZBarReaderView *readerView;
            UITextView *resultText;
        }

        @property (nonatomic, retain) UIImagePickerController *imgPicker;
        @property (strong, nonatomic) IBOutlet UITextView *resultText;
        @property (strong, nonatomic) IBOutlet ZBarReaderView *readerView;

        - (IBAction)StartScan:(id)sender;

    SignInViewController.m:

        #import "SignInViewController.h"

        @interface SignInViewController ()
        @end

        @implementation SignInViewController

        @synthesize resultText, readerView;

        - (IBAction)StartScan:(id)sender
        {
            readerView = [ZBarReaderView new];
            readerView.readerDelegate = self;
            readerView.tracksSymbols = NO;
            readerView.frame = CGRectMake(30, 70, 230, 230);
            readerView.torchMode = 0;
            readerView.device = [self frontFacingCameraIfAvailable];

            ZBarImageScanner *scanner = readerView.scanner;
            [scanner setSymbology:ZBAR_I25 config:ZBAR_CFG_ENABLE to:0];

            [self relocateReaderPopover:[self interfaceOrientation]];
            [readerView start];
            [self.view addSubview:readerView];
            resultText.hidden = NO;
        }

        - (void)readerControllerDidFailToRead:(ZBarReaderController *)reader withRetry:(BOOL)retry
        {
            NSLog(@"the image picker failing to read");
        }

        - (void)imagePickerController:(UIImagePickerController *)reader didFinishPickingMediaWithInfo:(NSDictionary *)info
        {
            NSLog(@"the image picker is calling successfully %@", info);
            id<NSFastEnumeration> results = [info objectForKey:ZBarReaderControllerResults];
            ZBarSymbol *symbol = nil;
            NSString *hiddenData;
            for (symbol in results)
                hiddenData = [NSString stringWithString:symbol.data];
            NSLog(@"the symbols is the following %@", symbol.data);
            resultText.text = symbol.data;
            NSLog(@"BARCODE= %@", symbol.data);
            NSLog(@"SYMBOL : %@", hiddenData);
            resultText.text = hiddenData;
        }

    The error I get when a code is scanned:

        2012-12-16 14:28:32.797 QRTestApp[7970:907] -[SignInViewController readerView:didReadSymbols:fromImage:]: unrecognized selector sent to instance 0x1e88b1c0
        2012-12-16 14:28:32.799 QRTestApp[7970:907] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[SignInViewController readerView:didReadSymbols:fromImage:]: unrecognized selector sent to instance 0x1e88b1c0'

    I'm not too worried about what happens with the results just yet; I just want to get past this error. It took me ages just to get the camera to come up in the UIView due to a severe lack of tutorials or documentation on ZBarReaderView (for beginners, anyway). Thanks all.
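
    The crash message names the exact selector that's missing: ZBarReaderView reports results to its delegate through readerView:didReadSymbols:fromImage: (the ZBarReaderViewDelegate protocol), not through the UIImagePickerController-style callbacks that ZBarReaderController uses. A minimal sketch of the missing method, reusing the resultText outlet from the question (declaring the controller as conforming to ZBarReaderViewDelegate in the header should also be part of the fix):

        // ZBarReaderViewDelegate callback; this is the selector in the exception.
        - (void)readerView:(ZBarReaderView *)view
            didReadSymbols:(ZBarSymbolSet *)symbols
                 fromImage:(UIImage *)image
        {
            for (ZBarSymbol *symbol in symbols) {
                resultText.text = symbol.data;   // take the first decoded result
                break;
            }
            [view stop];
        }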


  • Are free operator->* overloads evil?

    - by Potatoswatter
    I was perusing section 13.5 after refuting the notion that built-in operators do not participate in overload resolution, and noticed that there is no section on operator->*. It is just a generic binary operator. Its brethren, operator->, operator*, and operator[], are all required to be non-static member functions. This precludes definition of a free function overload to an operator commonly used to obtain a reference from an object. But the uncommon operator->* is left out. In particular, operator[] has many similarities. It is binary (they missed a golden opportunity to make it n-ary), and it accepts some kind of container on the left and some kind of locator on the right. Its special-rules section, 13.5.5, doesn't seem to have any actual effect except to outlaw free functions. (And that restriction even precludes support for commutativity!) So, for example, this is perfectly legal (in C++0x; remove obvious stuff to translate to C++03):

        #include <utility>
        #include <iostream>
        #include <type_traits>
        using namespace std;

        template< class F, class S >
        typename common_type< F,S >::type
        operator->*( pair<F,S> const &l, bool r )
            { return r? l.second : l.first; }

        template< class T >
        T & operator->*( pair<T,T> &l, bool r ) { return r? l.second : l.first; }

        template< class T >
        T & operator->*( bool l, pair<T,T> &r ) { return l? r.second : r.first; }

        int main() {
            auto x = make_pair( 1, 2.3 );
            cerr << x->*false << " " << x->*4 << endl;

            auto y = make_pair( 5, 6 );
            y->*(0) = 7;
            y->*0->*y = 8; // evaluates to 7->*y = y.second
            cerr << y.first << " " << y.second << endl;
        }

    I can certainly imagine myself giving into temp[la]tation. For example, scaled indexes for vector:

        v->*matrix_width[2][5] = x;

    Did the standards committee forget to prevent this, was it considered too ugly to bother, or are there real-world use cases?


  • Resolve naming conflict in included XSDs for JAXB compilation

    - by Jason Faust
    I am currently trying to compile a pair of schema files into the same package with JAXB (IBM build 2.1.3). Each will compile on its own, but when trying to compile them together I get an element naming conflict due to includes. My question is: is there a way to specify, with an external binding, a resolution to the naming collision? Example files follow. In the example the offending element is called "Common", which is defined in both incA and incB.

    incA.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema xmlns="http://www.w3.org/2001/XMLSchema"
                targetNamespace="http://www.example.org/"
                xmlns:tns="http://www.example.org/"
                elementFormDefault="qualified">
            <complexType name="TypeA">
                <sequence>
                    <element name="ElementA" type="string"></element>
                </sequence>
            </complexType>
            <!-- Conflicting element -->
            <element name="Common" type="tns:TypeA"></element>
        </schema>

    incB.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema xmlns="http://www.w3.org/2001/XMLSchema"
                targetNamespace="http://www.example.org/"
                xmlns:tns="http://www.example.org/"
                elementFormDefault="qualified">
            <complexType name="TypeB">
                <sequence>
                    <element name="ElementB" type="int"></element>
                </sequence>
            </complexType>
            <!-- Conflicting element -->
            <element name="Common" type="tns:TypeB"></element>
        </schema>

    A.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema targetNamespace="http://www.example.org/"
                elementFormDefault="qualified"
                xmlns="http://www.w3.org/2001/XMLSchema"
                xmlns:tns="http://www.example.org/">
            <include schemaLocation="incA.xsd"></include>
            <complexType name="A">
                <sequence>
                    <element ref="tns:Common"></element>
                </sequence>
            </complexType>
        </schema>

    B.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema targetNamespace="http://www.example.org/"
                elementFormDefault="qualified"
                xmlns="http://www.w3.org/2001/XMLSchema"
                xmlns:tns="http://www.example.org/">
            <include schemaLocation="incB.xsd"></include>
            <complexType name="B">
                <sequence>
                    <element ref="tns:Common"></element>
                </sequence>
            </complexType>
        </schema>

    Compiler error when both are compiled from one invocation of xjc:

        [ERROR] 'Common' is already defined
            line 9 of file:/C:/temp/incB.xsd
        [ERROR] (related to above error) the first definition appears here
            line 9 of file:/C:/temp/incA.xsd

    (For reference, this is a generalization to resolve an issue with compiling the OAGIS8 SP3 package.)


  • Looping through siblings of a specific row/setting up function

    - by Matt
    Working through the JavaScript book I am using to learn the language, I got stuck on looping through the siblings of a specific row. W3Schools and W3 didn't have what I was looking for. Below is a function walkthrough. It reads:

    Create the countRecords() function. The purpose of this function is to count the number of visible rows in the data table after the table headings. The total is then displayed in the table cell with the id "records". Add the following commands to the function:

    a. Create an object named headRow that points to the table row with the id "titleRow". Create a variable named rowCount, setting its initial value to 0.

    b. Create a for loop that uses familial references, starting with the first sibling of headRow and moving to the next sibling until there are no siblings left. Within the for loop, test whether the node name of the current node is equal to "TR". If it is, test whether the value of its display style is equal to an empty text string. If it is (indicating that it is visible in the document), increase the value of the rowCount variable by 1.

    c. Change the text of the "records" table cell to the value of the rowCount variable. Don't use innerHTML. Create a text node that contains the value of the rowCount variable and assign it to a variable called txt. Create a variable called record to store the reference to the "records" table cell.

    d. Insert an if condition that tests whether the "records" cell has any child nodes. If it does, replace the text node of the "records" table cell with the created text node (txt). If it doesn't, append the text node to the cell.

    Here is my attempt so far (part b is where I get lost; I know I need to access the id titleRow but I am unsure how to set up my loop specifically for this):

        var headRow; // part a
        var rowCount = 0;

        // part b
        headRow = document.getElementById("titleRow");
        for (var i = 0; i < headrow.length; i++) {
            if (something is not equal == "TH") {
                // make code happen here
            }
            if (is "TR" == "") {
                rowCount = +1;
            }
        }

        // part c
        var txt = document.createTextNode(rowCount);
        var record = document.getElementsById("records")

        // part d: holding off on this part until I get a, b, c figured out.

    The HTML supporting snippet:

        <table id="filters">
            <tr><th colspan="2">Filter Product List</th></tr>
            <tr>
                <td>Records: </td>
                <td id="records"></td>
            </tr>

        <table id="prodTable">
            <tr><th colspan="8">Digital Cameras</th></tr>
            <tr id="titleRow">
                <th>Model</th>
                <th>Manufacturer</th>
                <th>Resolution</th>
                <th>Zoom</th>
                <th>Media</th>
                <th>Video</th>
                <th>Microphone</th>
            </tr>

    Thanks for the help!
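
    Following the walkthrough literally, part b wants a sibling walk (nextSibling), not an index loop, and part d is a replaceChild/appendChild branch. A sketch assembled from exactly those steps (note the book counts rows after the title row, and the DOM method is getElementById with no "s"):

        function countRecords() {
            var headRow = document.getElementById("titleRow");   // part a
            var rowCount = 0;

            // part b: walk the siblings that follow the title row
            for (var node = headRow.nextSibling; node !== null; node = node.nextSibling) {
                if (node.nodeName === "TR" && node.style.display === "") {
                    rowCount++;   // a visible table row
                }
            }

            // part c
            var txt = document.createTextNode(rowCount);
            var record = document.getElementById("records");

            // part d
            if (record.hasChildNodes()) {
                record.replaceChild(txt, record.firstChild);
            } else {
                record.appendChild(txt);
            }
        }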


  • globals and locals in python exec()

    - by hawkettc
    Hi, I'm trying to run a piece of Python code using exec.

        my_code = """
        class A(object):
            pass

        print 'locals: %s' % locals()
        print 'A: %s' % A

        class B(object):
            a_ref = A
        """

        global_env = {}
        local_env = {}

        my_code_AST = compile(my_code, "My Code", "exec")
        exec(my_code_AST, global_env, local_env)

        print local_env

    which results in the following output:

        locals: {'A': <class 'A'>}
        A: <class 'A'>
        Traceback (most recent call last):
          File "python_test.py", line 16, in <module>
            exec(my_code_AST, global_env, local_env)
          File "My Code", line 8, in <module>
          File "My Code", line 9, in B
        NameError: name 'A' is not defined

    However, if I change the code to this:

        my_code = """
        class A(object):
            pass

        print 'locals: %s' % locals()
        print 'A: %s' % A

        class B(A):
            pass
        """

        global_env = {}
        local_env = {}

        my_code_AST = compile(my_code, "My Code", "exec")
        exec(my_code_AST, global_env, local_env)

        print local_env

    then it works fine, giving the following output:

        locals: {'A': <class 'A'>}
        A: <class 'A'>
        {'A': <class 'A'>, 'B': <class 'B'>}

    Clearly A is present and accessible - what's going wrong in the first piece of code? I'm using 2.6.5. Cheers, Colin

    * UPDATE 1 *

    If I check the locals() inside the class:

        my_code = """
        class A(object):
            pass

        print 'locals: %s' % locals()
        print 'A: %s' % A

        class B(object):
            print locals()
            a_ref = A
        """

        global_env = {}
        local_env = {}

        my_code_AST = compile(my_code, "My Code", "exec")
        exec(my_code_AST, global_env, local_env)

        print local_env

    then it becomes clear that locals() is not the same in both places:

        locals: {'A': <class 'A'>}
        A: <class 'A'>
        {'__module__': '__builtin__'}
        Traceback (most recent call last):
          File "python_test.py", line 16, in <module>
            exec(my_code_AST, global_env, local_env)
          File "My Code", line 8, in <module>
          File "My Code", line 10, in B
        NameError: name 'A' is not defined

    However, if I do this, there is no problem:

        def f():
            class A(object):
                pass

            class B(object):
                a_ref = A

        f()
        print 'Finished OK'

    * UPDATE 2 *

    OK, so the docs here - http://docs.python.org/reference/executionmodel.html - say:

        "A class definition is an executable statement that may use and define names.
        These references follow the normal rules for name resolution. The namespace of
        the class definition becomes the attribute dictionary of the class. Names
        defined at the class scope are not visible in methods."

    It seems to me that 'A' should be made available as a free variable within the executable statement that is the definition of B, and this happens when we call f(), but not when we use exec(). This can be more easily shown with the following:

        my_code = """
        class A(object):
            pass

        print 'locals in body: %s' % locals()
        print 'A: %s' % A

        def f():
            print 'A in f: %s' % A

        f()

        class B(object):
            a_ref = A
        """

    which outputs:

        locals in body: {'A': <class 'A'>}
        A: <class 'A'>
        Traceback (most recent call last):
          File "python_test.py", line 20, in <module>
            exec(my_code_AST, global_env, local_env)
          File "My Code", line 11, in <module>
          File "My Code", line 9, in f
        NameError: global name 'A' is not defined

    So I guess the new question is - why aren't those locals being exposed as free variables in functions and class definitions? It seems like a pretty standard closure scenario.
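
    The short version of what's happening: when exec is given two different mappings, the code runs like a class body, not a module. Top-level names such as A land in local_env, but a nested class body or function is a new scope, and its free-name lookups skip the enclosing "exec locals" and go straight to globals (global_env), where A was never stored. A sketch of the usual fix, passing a single dict so the code executes with module semantics:

        env = {}
        my_code_AST = compile(my_code, "My Code", "exec")
        exec(my_code_AST, env)   # one mapping: globals and locals are the same dict

        print env['B']           # now defined; B.a_ref is env['A']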


  • how to gzip-compress large Ajax responses (HTML only) in Coldfusion?

    - by frequent
    I'm running ColdFusion 8 with jQuery/jQuery Mobile on the front end. I'm playing around with an Ajax-powered search engine, trying to find the best trade-off between data volume and client-side processing time. Currently my Ajax search returns 40k of (JQM-enhanced) markup, which avoids any client-side enhancement. This way I get by without the page stalling for about 2-3 seconds while JQM enhances all elements in the search results. What I'm curious about is whether I can gzip Ajax responses sent from ColdFusion. If I check the headers of my search right now, I have this:

        RESPONSE header:
        Connection           Keep-Alive
        Content-Type         text/html; charset=UTF-8
        Date                 Sat, 01 Sep 2012 08:47:07 GMT
        Keep-Alive           timeout=5, max=95
        Server               Apache/2.2.21 (Win32) mod_ssl/2.2.21 ...
        Transfer-Encoding    chunked

        REQUEST header:
        Accept               */*
        Accept-Encoding      gzip, deflate
        Accept-Language      de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Connection           keep-alive
        Cookie               CFID= ; CFTOKEN= ; resolution=1143
        Host                 www.host.com
        Referer              http://www.host.com/dev/users/index.cfm

    So my request would accept gzip/deflate, but I'm getting back chunked. I generate the Ajax response in a cfsavecontent (called compressedHTML) and run this to eliminate whitespace:

        <cfscript>
            compressedHTML = reReplace(renderedResults, "\>\s+\<", "> <", "ALL");
            compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(13), "ALL");
            compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(09), "ALL");
        </cfscript>

    before sending the compressedHTML in a response object like this:

        {"SUCCESS":true,"DATA": compressedHTML }

    Question: If I know I'm sending back HTML in my data object via Ajax, is there a way to gzip the response server-side before returning it, rather than sending chunked? If so, can I do this inside my response object, or would I have to send back "pure" HTML? Thanks!

    EDIT: Found this on setting a 'web.config' for dynamic compression - doesn't seem to work.

    EDIT 2: Found this snippet and am playing with it, although I'm not sure it will work:

        <cfscript>
            compressedHTML = reReplace(renderedResults, "\>\s+\<", "> <", "ALL");
            compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(13), "ALL");
            compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(09), "ALL");

            if ( cgi.HTTP_ACCEPT_ENCODING contains "gzip" AND not showRaw ){
                cfheader name="Content-Encoding" value="gzip";
                bos = createObject("java","java.io.ByteArrayOutputStream").init();
                gzipStream = createObject("java","java.util.zip.GZIPOutputStream");
                gzipStream.init(bos);
                gzipStream.write(compressedHTML.getBytes("utf-8"));
                gzipStream.close();
                bos.flush();
                bos.close();
                encoder = createObject("java","sun.misc.
                outStr = encoder.encode(bos.toByteArray());
                compressedHTML = toString(bos.toByteArray());
            }
        </cfscript>

    Probably need to try this on the response object and not the compressedHTML variable.
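
    Two hedged options. The least-code route on this stack is Apache's mod_deflate (AddOutputFilterByType DEFLATE text/html application/json in the vhost), which compresses even chunked responses transparently. Doing it in CFML instead, the key points are to gzip the whole serialized response (JSON envelope included), send the Content-Encoding header, and return the raw bytes with cfcontent so nothing re-encodes them. A sketch, untested, with names matching the question:

        <!--- Sketch: gzip the full JSON envelope; the browser's XHR inflates it
              transparently because of the Content-Encoding header. --->
        <cfset responseJSON = serializeJSON({ "SUCCESS" = true, "DATA" = compressedHTML })>
        <cfif cgi.HTTP_ACCEPT_ENCODING contains "gzip">
            <cfset bos = createObject("java", "java.io.ByteArrayOutputStream").init()>
            <cfset gzip = createObject("java", "java.util.zip.GZIPOutputStream").init(bos)>
            <cfset gzip.write(responseJSON.getBytes("utf-8"))>
            <cfset gzip.close()>
            <cfheader name="Content-Encoding" value="gzip">
            <cfcontent type="application/json; charset=utf-8" variable="#bos.toByteArray()#">
        <cfelse>
            <cfcontent type="application/json; charset=utf-8" reset="true"><cfoutput>#responseJSON#</cfoutput>
        </cfif>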



  • using ajax to post comments in cakephp results in 404 error, but no errors locally?

    - by Paul
    Using an Ajax helper created for use with jQuery, I've created a form that posts comments to "/comments/add". This works as expected on my local WAMP server, but not on my production server, where I get a '404 error, cannot find /comments/add'. I've spent quite a bit of time searching for a resolution with no luck so far. I've focused on trying to identify a gap, but nothing jumps out at me. Here are some observations:

    - works as expected on the local WAMP server
    - RequestHandler is listed as a component
    - files on both the local and production server are the same
    - the controllers folder has write access
    - autoLayout and autoRender are both set to false

    Here is the form in my view:

        <div class="comments form">
        <?php echo $ajax->form('/comments/add', 'tournament', array('url' => '/comments/add', 'update' => 'Comments', 'indicator' => 'commentSaved')); ?>
            <fieldset>
                <legend><?php __('Add Comment'); ?></legend>
                <div id="commentSaved" style="display: none; float: right;">
                    <h2>Loading...</h2>
                </div>
                <?php
                echo $form->hidden('Comment.foreign_id', array('value' => $tournament['Tournament']['id']));
                echo $form->hidden('Comment.belongs_to', array('value' => 'Tournament'));
                echo $form->input('Comment.name');
                echo $form->input('Comment.email');
                echo $form->input('Comment.web', array('value' => 'http://'));
                echo $form->input('Comment.content');
                ?>
            </fieldset>
        <?php echo $form->end('Submit'); ?>
        </div>

    And here is my comments controller's add action:

        function add() {
            if ($this->RequestHandler->isAjax()) {
                $this->autoLayout = false;
                $this->autoRender = false;
                $this->Comment->recursive = -1;
                $commentInfos = $this->Comment->findAllByIp($_SERVER['REMOTE_ADDR']);
                $spam = FALSE;
                foreach ($commentInfos as $commentInfo) {
                    if (time() - strtotime($commentInfo['Comment']['created']) < 180) {
                        $spam = TRUE;
                    }
                }
                if ($spam === FALSE) {
                    if (!empty($this->data)) {
                        $this->data['Comment']['ip'] = $_SERVER['REMOTE_ADDR'];
                        $this->Comment->create();
                        if ($this->Comment->save($this->data)) {
                            $this->Comment->recursive = -1;
                            $comments = $this->Comment->findAll(array('Comment.foreign_id' => $this->data['Comment']['foreign_id'], 'Comment.belongs_to' => $this->data['Comment']['belongs_to'], 'Comment.status' => 'approved'));
                            $this->set(compact('comments'));
                            $this->viewPath = 'elements'.DS.'posts';
                            $this->render('comments');
                        }
                    }
                } else {
                    $this->Comment->recursive = -1;
                    $comments = $this->Comment->findAll(array('Comment.foreign_id' => $this->data['Comment']['foreign_id'], 'Comment.belongs_to' => $this->data['Comment']['belongs_to'], 'Comment.status' => 'approved'));
                    $this->set(compact('comments'));
                    $this->viewPath = 'elements'.DS.'posts';
                    $this->render('spam');
                }
            } else {
                $this->Session->setFlash(__('Invalid Action. Please view a post to add a comment.', true));
                $this->redirect(array('controller' => 'pages', 'action' => 'display', 'home'));
            }
        }

    As you can see, I've made sure that autoLayout and autoRender are set to false, but in Firebug I still get a 404 error stating /comments/add cannot be found on the production server. I should also point out that I'm using jQuery, jquery.form and jquery.editable, as well as an Ajax helper created to be used with jQuery. Any help or even suggestions about how to troubleshoot this are really appreciated. Cheers, Paul
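
    Since the action renders fine locally, a plausible first suspect is URL rewriting on the production host rather than the CakePHP code: if mod_rewrite is disabled or the webroot .htaccess files were not uploaded (FTP clients often skip dotfiles), any non-root URL such as /comments/add 404s at the Apache level before CakePHP ever sees it. For reference, the stock CakePHP rewrite block that has to be active (a thing to check, not a guaranteed fix):

        # app/webroot/.htaccess (stock CakePHP 1.x rules)
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]
        </IfModule>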


  • Help to argue why to develop software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great, many times a blessing, and cost effective. I am not arguing against remote desktops, just that if one has the alternative to use either a remote desktop or a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices; in my case I am required to be physically present in the office when developing software.

    Background: I work in a company whose main business is not developing software. The company IT policies are therefore mainly focused on security and on efficiently deploying and maintaining thousands of computers for users. Further, the typical employee runs typical Office applications, like a word processor. Because safety and stability are such big priorities, every non-production system or application shall be deployed into a physically separate network, called the test network. Software development of course also belongs in the test network.

    To access the test network, the company has created a standard policy which dictates that access shall go only via a remote desktop client. In practice, from one's production computer one opens a remote desktop client to a virtual computer located in the test network. On the virtual computer's remote desktop one can access, run and install all development tools, like the Eclipse IDE. The alternative solution is a dedicated physical computer which is physically connected only to the test network. Both solutions are available in the company.

    I have tested both approaches and found running Eclipse IDE and SQL Developer in the remote desktop client to be sluggish (keystrokes are delayed), commands like alt-tab take me out of the remote client, and so on. Further, screen resolution and colors are different, just to mention a few issues. So there is nothing technically wrong with the remote client; it is just not optimal, and frankly demotivating.

    Now, with the new policies put in place, plans are to remove the physical computers connected to the test network. I am looking for help to argue for why software developers shall have a dedicated physical development computer, to be productive and cost effective. Remember that we are physically in the office. Note also that we are talking about approx. 50 computers out of 2000 employees, so the extra budget is relatively small; this is more about policy than cost.

    Please note that there are lots of similar setups in other companies that work great thanks to perfectly tuned systems. In my case, however, it is sluggish, and it would cost more money to troubleshoot and fine-tune the performance than to keep a few physical computers. As a business case we have argued that productivity will go down by 25%, though my feeling is that the reality is probably closer to 50%. This business case isn't really accepted, and I find it very difficult to defend to managers who have never used a rich IDE in their lives, never mind developed software. Further, the test network and remote client have no guaranteed service level: they are down for a few hours per month with the lowest priority on the fix list. Help is appreciated.


  • Centering contentplaceholders

    - by gh9
    I have a ContentPlaceHolder which needs to be centered (as close to absolutely as possible) in the page. The issue I am having is that when the browser moves to different resolutions, the centering is thrown off. I have tried using divs and tables, and I have switched out the percent units for em units, still with no luck. Any help would be greatly appreciated. My end goal is to have the ContentPlaceHolder always be in the center of the page, regardless of the resolution of the monitor.

        <div style="width:20%"/>
        <div style="width:60">
            contentplaceholder code
        </div>
        <div style="width:20%"/>

        <table>
            <tr>
                <td style="width:20%"> </td>
                <td style="width:60> </td>
                <td style="width:20%"> </td>
            </tr>
        </table>

    Code for the master page and the content page:

        <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Master.master.cs" Inherits="WorkRecordr.Master" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <script src="Assets/Scripts/Jqeury1.4.4.js" type="text/javascript"></script>
            <title></title>
            <style type="text/css">
                body
                {
                    background-color: #326598;
                }
                .outer
                {
                    width: 100%;
                    border: solid 1px gray;
                    padding: 1px;
                }
                .inner
                {
                    border: solid 1px gray;
                    width: 70%;
                    margin-left: auto;
                    margin-right: auto;
                }
            </style>
        </head>
        <body>
            <div style="text-align: center">
                <asp:ContentPlaceHolder ID="head" runat="server">
                </asp:ContentPlaceHolder>
            </div>
            <form id="form1" runat="server">
                <div class="outer">
                    <div class="inner">
                        <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server">
                        </asp:ContentPlaceHolder>
                    </div>
                </div>
            </form>
        </body>
        </html>

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="test.aspx.cs" Inherits="WorkRecordr.test" MasterPageFile="~/Master.Master" %>
        <asp:Content ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
            aaaaaaaaa***aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa***aaaaaaaaaaaaaaaa
        </asp:Content>
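
    Two small issues in the first snippet would break the centering on their own, independent of resolution: div elements can't be self-closed in HTML, and width:60 is missing its % unit (same in the table cell). With those fixed, a percentage-width block with auto side margins stays centered at any resolution; a minimal sketch:

        <!-- Sketch: margin:auto centers a block whose width is set;
             no side spacer divs are needed. -->
        <div style="width: 60%; margin: 0 auto;">
            <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server">
            </asp:ContentPlaceHolder>
        </div>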


  • top tweets WebLogic Partner Community – November 2011

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity glassfish GlassFish Marek’s JAX-RS 2.0 content from Devoxx 2011 – bit.ly/sp2NJO chriscmuir chriscmuir New blog post: ADF bug: missing af:column borders in af:table for IE7 – t.co/81np2jug chriscmuir chriscmuir Reading: Oracle’s ADF Rich Client User Interface (RCUI) Guidelines – oracle.com/webfolder/ux/m… netbeans NetBeans Team Bottlenecks be gone! #Java Performance Tuning workshop in Munich w Kirk Pepperdine, Nov 29-Dec 2: ow.ly/7Akh5 OracleBlogs OracleBlogs Creating ADF Faces Comamnd Button at Runtime ow.ly/1fM9dE alexismp Alexis MP blogged "GlassFish Back from Devoxx 2011, Mature Java EE 6 and EE 7 well on its way" – bit.ly/rP8LV0 JDeveloper JDeveloper & ADF Usage of jQuery in ADF dlvr.it/x3t84 20 hours ago Favorite Retweet Reply OTNArchBeat OTNArchBeat Webcast: Introducing Oracle WebLogic Server 12c: Developer Deep Dive – Dec 1 – 11am PT / 2pm ET bit.ly/t61W4G oraclepartners ORCL PartnerNetwork Brand new Oracle WebLogic 12c will launch on December 1, 10AM PT with a global Webcast highlighting salient… t.co/aflQQ3IX OracleBlogs OracleBlogs JDeveloper and ADF at UKOUG t.co/2CQTiB9n fnimphiu Frank Nimphius Attending UKOUG? All ADF sessions at a glance: t.co/TcMNTMXp 21 Nov Favorite Retweet Reply JDeveloper JDeveloper & ADF Free Webinar ‘ADF Task Flows for Beginners’, information and registration t.co/66jXnGgo via javafx4you javafx4you Java Developer Workshop #2 – Dec 1, 2011 @ Oracle Aoyama center in Tokyo t.co/8p9q3W2B AMIS_Services AMIS Services #vacature #Oracle #ADF ontwikkelaars. bit.ly/AMISADF Gun jezelf een nieuwe uitdaging? Meer op: dld.bz/azZ5N OracleBlogs OracleBlogs Launch Invitation: Introducing Oracle WebLogic Server 12c t.co/bRxCKwAk fnimphiu Frank Nimphius The brand new WebLogic 12c will be released on December 1st 2011 !!! Register for online launch event t.co/pPScg4Xh glassfish GlassFish Announcing Oracle WebLogic 12c – t.co/qh8TdFEl AdamBien Adam Bien Sun Coding Conventions–The Only Standard (Stop Inventing): Code written according to the Sun Coding Conventions… t.co/qaUWp5Mz wlscommunity WebLogic Community Launch Invitation: Introducing Oracle WebLogic Server 12c wp.me/p1LMIb-4y andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Custom Exception Registration for ADF BC EO Attribute fb.me/1m6nXQD52 MNEMONIC01 Michel Schildmeijer Blog by Michel Schildmeijer: "Oracle WebLogic 12c has been announced" bit.ly/vk6WQL glassfish GlassFish Tab Sweep – Coherence, SBT for GlassFish, OSGi in question, Java EE plugins, … t.co/tVIL95lj OracleBlogs OracleBlogs JavaFX 2.0 at Devoxx 2011 ow.ly/1fJ5iT JDeveloper JDeveloper & ADF Experimenting with ADF BC Application Module Pool Tuning dlvr.it/wjLC1 OracleWebLogic Oracle WebLogic Brand New #WebLogic 12c Launch Event, Dec 1 10am PT. Hasan Rizvi, SVP Fusion Middleware. Developer session. bit.ly/weblogic12clau… JDeveloper JDeveloper & ADF PopUp and Esc/Cancel operations. ADF 11g dlvr.it/whrmC JDeveloper JDeveloper & ADF BPM Workspace: issue loading ADF task flows t.co/vk1gKPx5 OpenJDK OpenJDK Kelly O’Hair — OpenJDK B24 Available : t.co/1bFws6Nw JDeveloper JDeveloper & ADF Oracle ADF setting Task flow to use same page definition file of caller page t.co/9k6UIoYZ JDeveloper JDeveloper & ADF Master Detail Data presentation and CRUD Operations. Detail records in an Editable Popup. 
ADF 11g t.co/H8uudR0Y JDeveloper JDeveloper & ADF Entity Attribute Validation Rule (Business Rule) based on Master View Object Attribute Example ADF 11g t.co/1agxEQcZ oracletechnet Justin Kestelyn Webcast: Oracle WebLogic Server 12c Launch/Developer Deep-Dive (Dec. 1) t.co/OVBdGKzC JDeveloper JDeveloper & ADF How to render different node icons for different tree levels dlvr.it/wY2jL JDeveloper JDeveloper & ADF Query Component with ‘dynamic’ view criteria dlvr.it/wXlF1 JDeveloper JDeveloper & ADF How to play Flash .swf file in Oracle ADF application t.co/zaSONWAH Devoxx Devoxx Duke at the #Devoxx 2011 Noxx Party! pic.twitter.com/bVJWyu1Z brhubart Bob Rhubart Adam Leftik: JavaEE adoption continues to increase, reaching 40+ million downloads this year. #qconsf11 JDeveloper JDeveloper & ADF Free #ODTUG Seminar – #ADF Task Flows for Beginners – sign up today. www3.gotomeeting.com/register/13372… java Java New Project: OpenJFX j.mp/tI4k3s #javafx #openjdk #devoxx << JavaFX is open source! /via frankmunz Frank Munz WebLogic 12c launch event Dec 1st. t.co/jQKinBqN brhubart Bob Rhubart Spring to Java EE Migration | David Heffelfinger feedly.com/k/td8ccG odtug ODTUG Mark your calendars and register for our upcoming webinars: bit.ly/dWKG1C ADF Task Flows & Measuring Scalability & Performance w/TCP myfear Markus Eisele Anybody willing to take this question? Using #JavaMail with #Weblogic Server bit.ly/stJOET AMIS_Services AMIS Services 20-22 december #training #Oracle JHeadstart #11g, productief ontwikkelen met ADF. Schrijf je in op: amis.nl/trainingen/ora… AdamBien Adam Bien Stress Testing Java EE 6 Applications – Free Article In Free Java Magazine: In the November / December 2011 issu… bit.ly/vmzKkc java Java New Tech Article: Spring to #JavaEE Migration t.co/0EvdHNxb OracleBlogs OracleBlogs WebLogic Java record SPARC T4-4 Servers Set World Record on SPECjEnterprise2010 t.co/Eu1b6ZE0 OracleBlogs OracleBlogs What Is JavaFX? ow.ly/1frb6I OTNArchBeat OTNArchBeat The openJDK Windows Binary Download | Adam Bien ow.ly/7fRiG wlscommunity WebLogic Community WebLogic – Java record – SPARC T4-4 Servers Set World Record on SPECjEnterprise2010 glassfish GlassFish "youtube.com/java" blogs.oracle.com/theaquarium/en… OTNArchBeat OTNArchBeat Beta Testing Concludes: 1Z1-102 – "Oracle WebLogic Server 11g: System Administration I" (Oracle Certification) ow.ly/7fJCl wlscommunity WebLogic Community A deep dive in Oracle WebLogic! @ Contribute – November 29th, 2011 Kontich Belgium wp.me/p1LMIb-4u glassfish GlassFish Gartner’s Latest Enterprise Application Server Magic Quadrant – Oracle’s leadership t.co/aYDqipD8 OpenJDK OpenJDK Terrence Barr – Open sourcing of JavaFX: OpenJFX Project proposed – bit.ly/uKVnEl OpenJDK OpenJDK Maurizio Cimadamore – Testing overload resolution: bit.ly/vgXAbQ java Java Java User Groups Roundup, November 2011 : t.co/hea6vVnk /via @robilad << in German JavaSpotlight The Java Spotlight Java Spotlight Episode 54: Stuart Marks on the Coinification of JDK7 goo.gl/fb/3UXoM OTNArchBeat OTNArchBeat Article Series: Migrating Spring to Java EE 6 | Arun Gupta bit.ly/twUJtz glassfish GlassFish New Java EE 6 Hands-On lab, Devoxx-approved! bit.ly/vup5uE java Java Brian Goetz’s enthusiasm for Java is palpable! #devoxx interview adf_emg ADF EMG "ADF testing with a mock framework" – what is a mock framework? Visit the forum and see: groups.google.com/forum/#!topic/… java Java Taping a bunch of interviews today with Java experts at #devoxx. View on Parleys.com tomorrow. 
glassfish GlassFish New screencast to configure and run a cross-machine cluster using GlassFish 3.1.1 in < 7 mins faissalb.blogspot.com/2011/11/glassf… (via @bfaissal) glassfish GlassFish Oracle Contributor Agreements – New Home! bit.ly/tD2eLo OTNArchBeat OTNArchBeat Java Magazine – by and for the Java Community- inaugural issue bit.ly/tTv8UD OTNArchBeat OTNArchBeat The Heroes of Java: Michael Hüttermann | @MyFear bit.ly/rYYOFe javafx4you javafx4you Development with #JavaFX on #Linux j.mp/uOpe69 #not_for_the_faint_of_heart java Java Contribute Technical Questions for Java Experts at #devoxx bit.ly/up2cN0 netbeans NetBeans Team A simple REST service using #NetBeans 7, #Java Servlet, and #JAXB: t.co/pKkufsD8 AdamBien Adam Bien The most beautiful, and portable slide of the whole #jaxcon for "Die Hard Java EE 6"session checked-in: kenai.com/projects/javae… jaxlondon JAX London Mark Little’s (@nmcl) excellent keynote from #jaxlondon ‘Middleware Everywhere…’ is available in full – t.co/8vBmtDJ1 AdamBien Adam Bien Calculator sample from "Die Hard Java EE 6" #jaxcon session checked-in: t.co/0UqaULfg OTNArchBeat OTNArchBeat ADF Faces – a logic bomb in the order of bean instantiations | @ChrisCMuir bit.ly/vjqRaZ OracleBlogs OracleBlogs ODI 11g y JMS Queue de Weblogic ow.ly/1fzfQJ frankmunz Frank Munz Which WebLogic book do you recommend? Review of S. Alapati’s WebLogic 11g Administration Handbook. bit.ly/rP0RtW JDeveloper JDeveloper & ADF PageFlowScope with Unbounded Task Flows: the magic sauce for multi-browser-tab support in JDeveloper ADF applications dlvr.it/vNFgn OracleBlogs OracleBlogs 3 New ADF Insider Essential training videos published. ow.ly/1fz94q OracleBlogs OracleBlogs Weblogic Server 11gR1 PS2: Administration Essentials book and eBook t.co/ykzwIaqs OracleBlogs OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4 wlscommunity WebLogic Community Oracle Weblogic Server 11gR1 PS2: Administration Essentials book and eBook andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Stress Testing Oracle ADF BC Applications – Intern… andrejusb.blogspot.com/2011/11/stress… OracleBlogs OracleBlogs Frank Nimphius presenting a full day of Oracle ADF in Switzerland ow.ly/1fxU78 java Java #JavaEE and #GlassFish: #JavaOne11 Slides, Demos, Replays, Hands-on Labs t.co/tLM0ehrD OracleBlogs OracleBlogs weblogic.security.SecurityInitializationException: Authentication for user weblogic denied ow.ly/1fxmiu glassfish GlassFish The Last Migration – GlassFish Wiki : t.co/Dc5FT1SJ OTNArchBeat OTNArchBeat A Successful Year of @MiddlewareMagic t.co/amcGGTTk OracleWebLogic Oracle WebLogic Unbeatable Performance for your Cloud Applications with Exalogic, #OracleCoherence and #WebLogic. ow.ly/7lYKm OTNArchBeat OTNArchBeat Stress Testing Oracle ADF BC Applications – Passivation and Activation | @AndrejusB bit.ly/sASssL OTNArchBeat OTNArchBeat Review: "Oracle Weblogic Server 11gR1 PS2: Administration Essentials" by Michel Schildmeijer | @MyFear t.co/ll6ra0J9 OTNArchBeat OTNArchBeat GlassFish 3.1.2 themes and features | The Aquarium bit.ly/vVqr9r Andre_van_Dalen Andre van Dalen Masterclass: Advanced Oracle ADF 11g lnkd.in/M_45Pi AdamBien Adam Bien The "lunch" edition of RentACar is pushed into: kenai.com/projects/javae… #wjax AdamBien Adam Bien In munich, room munich at #wjax. Welcome to #javaee workshop. Gather your questions. 
15 minutes to go lucasjellema Lucas Jellema Review by Markus of Michel’s book: t.co/41U9wvOb In short: valuable for novice WLS users, maybe not so much for die-hard WLS admin. biemond Edwin Biemond “@myfear: [blog] #Review: "#Weblogic Server 11gR1 PS2: Administration Essentials" t.co/LsODcb3e” got the same conclusion on amazon glassfish GlassFish Practical advice for deploying Lift apps to GlassFish: bit.ly/t3KUml glassfish GlassFish The unbearable lightness of GlassFish t.co/v9307SEJ javafx4you javafx4you Building Java EE applications in JavaFX: JavaFX 2.0, FXML and Spring j.mp/tiMDUh andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Stress Testing Oracle ADF BC Applications – Passiv… andrejusb.blogspot.com/2011/11/stress… wlscommunity WebLogic Community “@AMIS_Services: Follow @amis_services To Win a copy of SOA Suite 11g Handbook by @lucasjellema dld.bz/axD22 pls RT” excellent book! glassfish GlassFish GlassFish 3.1.2 themes and features bit.ly/uEc6uZ biemond Edwin Biemond Weblogic pre-sales exam was hard, you really need to know the versions , upgrade path and have a score above 80% monkchips James Governor The Rise and Fall and Rise of Java. JAX 2011 london keynote. how big data and the web are floating the boat. slidesha.re/u3Kzlo glassfish GlassFish Tab Sweep – Jersey, Hudson, GlassFish Hosting, GC’s compared, Spring to JavaEE, Modularity, … bit.ly/u9Cc30 oracletechnet Justin Kestelyn Oracle Tuxedo: A renewed acquaintance t.co/gp0mmf20 OTNArchBeat OTNArchBeat Oracle Enterprise Pack for Eclipse, OEPE 11.1.1.8 bit.ly/tC3eKp OracleBlogs OracleBlogs NetBeans HTML Editor and Groovy Editor in a Multiview Component (Part 2) ow.ly/1ftCeI myfear Markus Eisele [blog] #Oracle 2008 – 2011 in Gartners Magic Quadrant for Enterprise Application Servers t.co/2Bs1vgMZ myfear Markus Eisele [blog] #EclipseCon Europe – Java 7 in the Enterprise goo.gl/fb/r80df #ece2011 #java7 javafx4you javafx4you JavaFX 2.0 for Mac build b07 (developer preview) is available for download j.mp/vSwmBP Enjoy! #JavaFX #Mac OracleBlogs OracleBlogs A deep dive in Oracle WebLogic! @ Contribute November 29th, 2011 Kontich Belgium ow.ly/1fsEZs arungupta Arun Gupta #JavaEE7 slides from #jaxlondon and #jfall11 now available: slidesha.re/sh4iFq AdamBien Adam Bien Just checked-in the results of the #jaxlondon community night (somehow beer related): kenai.com/projects/javae… glassfish GlassFish GlassFish Podcast Episode #080 – User Stories, Part 3: Adam Bien and Sean Comerford (ESPN) blogs.oracle.com/glassfishpodca… glassfish GlassFish Story: t.co/jQPqihJb using GlassFish blogs.oracle.com/stories/entry/… "3000+ requests/sec" and more enterprisejava Java EE Mentions New blog post WebLogic deployment status checks for CI wp.me/pOOSs-F #weblogic #continuousintegration /vi… bit.ly/uZz0fk The become a member in the WebLogic Partner Community please first login at http://partner.oracle.com and then visit: http://www.oracle.com/partners/goto/wls-emea Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,OPN,Oracle,Jürgen Kress

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today’s new capabilities include:

- Storage: Import/Export Hard Disk Drives to your Storage Accounts
- HDInsight: General Availability of our Hadoop Service in the cloud
- Virtual Machines: New VM Gallery, ACL support for VIPs
- Web Sites: WebSocket and Remote Debugging Support
- Notification Hubs: Segmented customer push notification support with tag expressions
- TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
- Developer Analytics: New Relic support for Web Sites + Mobile Services
- Service Bus: Support for partitioned queues and topics
- Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

Storage: Import/Export Hard Disk Drives to Windows Azure

I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

Encrypted Transport

Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

How to Import/Export your first Hard Drive of Data

You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview). Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

HDInsight: 100% Compatible Hadoop Service in the Cloud

Last week we announced the general availability release of Windows Azure HDInsight. HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or using our PowerShell and Cross Platform Command line tools.

The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

Virtual Machines: VM Gallery Enhancements

Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

- Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
- Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images.
- MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
- Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
- Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
Virtual Machines: ACL Support for VIPs

A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads.

Here are the default behaviors for ACLs in Windows Azure:

- By default (i.e. no rules specified), all traffic is permitted.
- When using only Permit rules, all other traffic is denied.
- When using only Deny rules, all other traffic is permitted.
- When there is a combination of Permit and Deny rules, all other traffic is denied.

Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

Web Sites: Web Sockets Support

With today’s release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”. Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.
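To give a feel for what the SignalR piece looks like in code, here is a minimal hub in the spirit of the standard SignalR chat samples; the ChatHub name and the addMessage client callback are illustrative placeholders rather than code from the post itself:

using Microsoft.AspNet.SignalR;

// A minimal SignalR hub. With Web Sockets enabled on the Web Site, SignalR
// automatically negotiates the WebSocket transport when the client supports it.
public class ChatHub : Hub
{
    public void Send(string name, string message)
    {
        // Broadcast the message to every connected client.
        Clients.All.addMessage(name, message);
    }
}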
Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the “Attach Debugger” option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure. The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio. For example, I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I was then able to debug the app remotely using Visual Studio.

Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

New support for using tag expressions to enable advanced customer segmentation

With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

- Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
- Push content to multiple segments in a single message—e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
- Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

var notification = new WindowsNotification(messagePayload);
hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

- Social: To “all my group except me” - group:id && !user:id
- Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
- Hours: Send notifications at specific times. E.g. tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
- Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.
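As an aside, a fuller, self-contained version of the send logic above, using the Notification Hubs .NET client, might look like the following sketch; the hub name, connection string, and toast payload are placeholder assumptions, not values from the post:

using System.Threading.Tasks;
using Microsoft.ServiceBus.Notifications;

class TagExpressionSample
{
    static async Task SendGameDayOfferAsync()
    {
        // Hypothetical connection string and hub name for illustration.
        NotificationHubClient hub = NotificationHubClient.CreateClientFromConnectionString(
            "<your Notification Hubs connection string>", "myhub");

        // A simple WNS toast payload (ToastText01 template).
        string toast =
            "<toast><visual><binding template=\"ToastText01\">" +
            "<text id=\"1\">Game day special at our Boston locations!</text>" +
            "</binding></visual></toast>";

        // Same tag expression as in the post: every Boston device that
        // follows either team.
        await hub.SendWindowsNativeNotificationAsync(
            toast, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");
    }
}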
TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The steps below demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services. I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. I gave the web site a name and then made sure the “Publish from source control” checkbox was selected. When we click next we’ll be prompted for the location of the source repository. We’ll select “Team Foundation Services”. Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”). When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it. Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything). Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic. Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever. Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu. This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed-up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. We can now see insights into how our app is performing – without having to have written a single line of monitoring code. The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region)
- Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
- Error information including call stack details for exceptions that have occurred at runtime
- SQL Server profiling information – including which queries executed against your database and what their performance was
- And a whole bunch more…

The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

Billing: New Billing Alert Service

Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign-up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available. The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month. The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
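Returning to the Service Bus partitioning feature above, here is a minimal sketch of creating a partitioned queue programmatically with the .NET Service Bus client; the queue name and connection string are placeholders, and the snippet simply illustrates the EnablePartitioning flag rather than reproducing the post’s own steps:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class PartitionedQueueSample
{
    static void Main()
    {
        // Hypothetical connection string for illustration.
        string connectionString = "<your Service Bus connection string>";
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // EnablePartitioning spreads the queue across multiple message
        // brokers and stores, which is what gives partitioned entities
        // their higher throughput and availability.
        var description = new QueueDescription("OrdersQueue")
        {
            EnablePartitioning = true
        };

        if (!namespaceManager.QueueExists(description.Path))
        {
            namespaceManager.CreateQueue(description);
        }

        // Sending and receiving work exactly as with a regular queue.
        var client = QueueClient.CreateFromConnectionString(connectionString, "OrdersQueue");
        client.Send(new BrokeredMessage("hello"));
    }
}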
Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Point of contact of 2 OBBs?

    - by Milo
I'm working on the physics for my GTA2-like game so I can learn more about game physics. The collision detection and resolution are working great. I'm now just unsure how to compute the point of contact when I hit a wall. Here is my OBB class:

public class OBB2D {
    private Vector2D projVec = new Vector2D();
    private static Vector2D projAVec = new Vector2D();
    private static Vector2D projBVec = new Vector2D();
    private static Vector2D tempNormal = new Vector2D();
    private Vector2D deltaVec = new Vector2D();

    // Corners of the box, where 0 is the lower left.
    private Vector2D corner[] = new Vector2D[4];
    private Vector2D center = new Vector2D();
    private Vector2D extents = new Vector2D();
    private RectF boundingRect = new RectF();
    private float angle;

    // Two edges of the box extended away from corner[0].
    private Vector2D axis[] = new Vector2D[2];
    private double origin[] = new double[2];

    public OBB2D(float centerx, float centery, float w, float h, float angle) {
        for (int i = 0; i < corner.length; ++i) {
            corner[i] = new Vector2D();
        }
        for (int i = 0; i < axis.length; ++i) {
            axis[i] = new Vector2D();
        }
        set(centerx, centery, w, h, angle);
    }

    public OBB2D(float left, float top, float width, float height) {
        for (int i = 0; i < corner.length; ++i) {
            corner[i] = new Vector2D();
        }
        for (int i = 0; i < axis.length; ++i) {
            axis[i] = new Vector2D();
        }
        set(left + (width / 2), top + (height / 2), width, height, 0.0f);
    }

    public void set(float centerx, float centery, float w, float h, float angle) {
        float vxx = (float) Math.cos(angle);
        float vxy = (float) Math.sin(angle);
        float vyx = (float) -Math.sin(angle);
        float vyy = (float) Math.cos(angle);

        vxx *= w / 2;
        vxy *= (w / 2);
        vyx *= (h / 2);
        vyy *= (h / 2);

        corner[0].x = centerx - vxx - vyx;
        corner[0].y = centery - vxy - vyy;
        corner[1].x = centerx + vxx - vyx;
        corner[1].y = centery + vxy - vyy;
        corner[2].x = centerx + vxx + vyx;
        corner[2].y = centery + vxy + vyy;
        corner[3].x = centerx - vxx + vyx;
        corner[3].y = centery - vxy + vyy;

        this.center.x = centerx;
        this.center.y = centery;
        this.angle = angle;
        computeAxes();
        extents.x = w / 2;
        extents.y = h / 2;
        computeBoundingRect();
    }

    // Updates the axes after the corners move. Assumes the
    // corners actually form a rectangle.
    private void computeAxes() {
        axis[0].x = corner[1].x - corner[0].x;
        axis[0].y = corner[1].y - corner[0].y;
        axis[1].x = corner[3].x - corner[0].x;
        axis[1].y = corner[3].y - corner[0].y;

        // Make the length of each axis 1/edge length so we know any
        // dot product must be less than 1 to fall within the edge.
        for (int a = 0; a < axis.length; ++a) {
            float l = axis[a].length();
            float ll = l * l;
            axis[a].x = axis[a].x / ll;
            axis[a].y = axis[a].y / ll;
            origin[a] = corner[0].dot(axis[a]);
        }
    }

    public void computeBoundingRect() {
        boundingRect.left = JMath.min(JMath.min(corner[0].x, corner[3].x), JMath.min(corner[1].x, corner[2].x));
        boundingRect.top = JMath.min(JMath.min(corner[0].y, corner[1].y), JMath.min(corner[2].y, corner[3].y));
        boundingRect.right = JMath.max(JMath.max(corner[1].x, corner[2].x), JMath.max(corner[0].x, corner[3].x));
        boundingRect.bottom = JMath.max(JMath.max(corner[2].y, corner[3].y), JMath.max(corner[0].y, corner[1].y));
    }

    public void set(RectF rect) {
        set(rect.centerX(), rect.centerY(), rect.width(), rect.height(), 0.0f);
    }

    // Returns true if other overlaps one dimension of this.
    private boolean overlaps1Way(OBB2D other) {
        for (int a = 0; a < axis.length; ++a) {
            double t = other.corner[0].dot(axis[a]);

            // Find the extent of box 2 on axis a.
            double tMin = t;
            double tMax = t;
            for (int c = 1; c < corner.length; ++c) {
                t = other.corner[c].dot(axis[a]);
                if (t < tMin) {
                    tMin = t;
                } else if (t > tMax) {
                    tMax = t;
                }
            }

            // We have to subtract off the origin.
            // See if [tMin, tMax] intersects [0, 1].
            if ((tMin > 1 + origin[a]) || (tMax < origin[a])) {
                // There was no intersection along this dimension;
                // the boxes cannot possibly overlap.
                return false;
            }
        }

        // There was no dimension along which there is no intersection.
        // Therefore the boxes overlap.
        return true;
    }

    public void moveTo(float centerx, float centery) {
        float cx, cy;
        cx = center.x;
        cy = center.y;
        deltaVec.x = centerx - cx;
        deltaVec.y = centery - cy;

        for (int c = 0; c < 4; ++c) {
            corner[c].x += deltaVec.x;
            corner[c].y += deltaVec.y;
        }

        boundingRect.left += deltaVec.x;
        boundingRect.top += deltaVec.y;
        boundingRect.right += deltaVec.x;
        boundingRect.bottom += deltaVec.y;

        this.center.x = centerx;
        this.center.y = centery;
        computeAxes();
    }

    // Returns true if the intersection of the boxes is non-empty.
    public boolean overlaps(OBB2D other) {
        if (right() < other.left()) { return false; }
        if (bottom() < other.top()) { return false; }
        if (left() > other.right()) { return false; }
        if (top() > other.bottom()) { return false; }
        if (other.getAngle() == 0.0f && getAngle() == 0.0f) { return true; }
        return overlaps1Way(other) && other.overlaps1Way(this);
    }

    public Vector2D getCenter() { return center; }

    public float getWidth() { return extents.x * 2; }

    public float getHeight() { return extents.y * 2; }

    public void setAngle(float angle) { set(center.x, center.y, getWidth(), getHeight(), angle); }

    public float getAngle() { return angle; }

    public void setSize(float w, float h) { set(center.x, center.y, w, h, angle); }

    public float left() { return boundingRect.left; }

    public float right() { return boundingRect.right; }

    public float bottom() { return boundingRect.bottom; }

    public float top() { return boundingRect.top; }

    public RectF getBoundingRect() { return boundingRect; }

    public boolean overlaps(float left, float top, float right, float bottom) {
        if (right() < left) { return false; }
        if (bottom() < top) { return false; }
        if (left() > right) { return false; }
        if (top() > bottom) { return false; }
        return true;
    }

    public static float distance(float ax, float ay, float bx, float by) {
        if (ax < bx) return bx - ay;
        else return ax - by;
    }

    public Vector2D project(float ax, float ay) {
        projVec.x = Float.MAX_VALUE;
        projVec.y = Float.MIN_VALUE;
        for (int i = 0; i < corner.length; ++i) {
            float dot = Vector2D.dot(corner[i].x, corner[i].y, ax, ay);
            projVec.x = JMath.min(dot, projVec.x);
            projVec.y = JMath.max(dot, projVec.y);
        }
        return projVec;
    }

    public Vector2D getCorner(int c) { return corner[c]; }

    public int getNumCorners() { return corner.length; }

    public static float collisionResponse(OBB2D a, OBB2D b, Vector2D outNormal) {
        float depth = Float.MAX_VALUE;
        for (int i = 0; i < a.getNumCorners() + b.getNumCorners(); ++i) {
            Vector2D edgeA;
            Vector2D edgeB;
            if (i >= a.getNumCorners()) {
                edgeA = b.getCorner((i + b.getNumCorners() - 1) % b.getNumCorners());
                edgeB = b.getCorner(i % b.getNumCorners());
            } else {
                edgeA = a.getCorner((i + a.getNumCorners() - 1) % a.getNumCorners());
                edgeB = a.getCorner(i % a.getNumCorners());
            }

            tempNormal.x = edgeB.x - edgeA.x;
            tempNormal.y = edgeB.y - edgeA.y;
            tempNormal.normalize();

            projAVec.equals(a.project(tempNormal.x, tempNormal.y));
            projBVec.equals(b.project(tempNormal.x, tempNormal.y));

            float distance = OBB2D.distance(projAVec.x, projAVec.y, projBVec.x, projBVec.y);
            if (distance > 0.0f) {
                return 0.0f;
            } else {
                float d = Math.abs(distance);
                if (d < depth) {
                    depth = d;
                    outNormal.equals(tempNormal);
                }
            }
        }

        float dx, dy;
        dx = b.getCenter().x - a.getCenter().x;
        dy = b.getCenter().y - a.getCenter().y;
        float dot = Vector2D.dot(dx, dy, outNormal.x, outNormal.y);
        if (dot > 0) {
            outNormal.x = -outNormal.x;
            outNormal.y = -outNormal.y;
        }
        return depth;
    }

    public Vector2D getMoveDeltaVec() { return deltaVec; }
}

Thanks!
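One common way to approximate the contact point, once the minimum-translation normal from a routine like collisionResponse() above is available, is to take the support point: the corner of the other box that lies deepest along the normal, averaging ties for edge-edge contacts. Below is a minimal sketch of that idea, written in C# with System.Numerics for brevity; the sign convention of the normal is an assumption, so flip the comparison if yours differs:

using System;
using System.Numerics;

static class ObbContact
{
    // Returns an approximate contact point for box B pushed out of box A.
    // 'normal' is the collision normal from collisionResponse(); this sketch
    // assumes it points from B towards A.
    public static Vector2 ContactPoint(Vector2[] cornersOfB, Vector2 normal)
    {
        const float eps = 1e-5f;

        // Track the smallest projection onto the normal (the deepest corner),
        // averaging corners whose projections tie (edge-edge contact).
        float minProj = Vector2.Dot(cornersOfB[0], normal);
        Vector2 sum = cornersOfB[0];
        int ties = 1;

        for (int i = 1; i < cornersOfB.Length; i++)
        {
            float proj = Vector2.Dot(cornersOfB[i], normal);
            if (proj < minProj - eps)            // strictly deeper corner
            {
                minProj = proj;
                sum = cornersOfB[i];
                ties = 1;
            }
            else if (Math.Abs(proj - minProj) <= eps) // tie: edge-edge contact
            {
                sum += cornersOfB[i];
                ties++;
            }
        }
        return sum / ties;
    }
}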

    Read the article

  • CodePlex Daily Summary for Monday, April 19, 2010

CodePlex Daily Summary for Monday, April 19, 2010

New Projects
- 8085 Microprocessor simulator: This program allows you to write 8085 programs in assembly and run those programs on your PC. It comes with lots of help, plus you can put breakpo...
- Additional.NET framework: The Additional.Net framework extends the functionality of the .NET framework for easier application development. It is developed in C#.
- Astoria Contrib: A contrib project for filling the gaps in WCF Data Services, providing missing functionality or augmenting with T4 templates, helpers, etc.
- ClipoWeb: ClipoWeb is a web clipboard that allows you to copy text and files between computers. Users access a web page on the source and destination compute...
- elearning Center: This is a web application written entirely in Silverlight. It is an e-learning application with full functionality and maximum interactivity...
- Excel VSTO SQL Server Browser: Get data from SQL Server and put it in Excel directly. The objective is to get more control over what you need to pull and create automatic pro...
- Generic Tree Structure: Generic base classes that help you to create complex tree structures without writing them again and again. Simple to use, like "var Node<Folder> fold...
- LAN Lordz LAN Party System: The LAN Lordz LAN Party System makes it easier for medium and large size events to track their attendance, sponsors, door prizes, tournaments, and...
- LiteFx: LiteFx is a framework that helps with implementing DDD (anemic or rich); it was developed by Douglas Aguiar (http://twitter.com/DouglasAgui...
- Managed UI Flow for ASP.NET MVC Framework: If your web application is getting more complex, understanding and managing complex UI flows (the pageflow of the application) gets harder and harder. If...
- Meus Exemplos: My Examples
- Orchard Blueprint Theme: Orchard BluePrint is a project that provides a WYSIWYG reference implementation of an Orchard theme to help designers get started with theme design....
- Outlook Social Network Connector - Avatar: Avatar is an open-source MS Outlook add-in that lets Douban users use Douban from within Outlook 2010: view the broadcasts, local events, and Douban mail related to a message's senders and recipients, and keep up with friends without visiting the site. The add-in is developed with C# and .NET 4.0; API requests are authenticated via OAuth. (Avat...
- Quadro Tree: This is a quad tree library.
- Sharepoint 2010 Alert Controller: In MOSS 2007 or Sharepoint Server 2010, if you want to see your alerts by list name, you should use this tool.
- SharePoint Web Parts: The goal of this project is to develop a set of web parts for SharePoint.
- Silverlight Image Cropper: This is a silverlight 4 util that makes it easy to crop out a number of images of a specific resolution screen or screens. ie. an easy way to crop ...
- SilverlightFTP: Silverlight ftp client
- splibex: libraries for sharepoint lists manipulation
- StardustExtensions: Official Extensions for Stardust
- Swim Team Manager: Swim Team Manager is designed for managing and tracking administrative and performance information for your club, school, or swim team. Swim Tea...
- ToDoListWpf: A To Do List; I used it to manage my work items. I am sorry for my poor English.
- Trance Layer: TranceLayer is a fast and flexible logging or diagnostics framework for .Net. It allows you to plug it into an existing or new application with m...
- Unoficcial NeoFM.hu NowPlaying: A little windows tray program. Shows what's on neofm.hu right now.
- WabbitStudio Z80 Software Tools: The software suite provides all of the tools you need to create high quality Z80 software in Z80 assembly language, with a focus on TI calculators....
- WinToolbar: Windows.Toolbar is a Silverlight library that implements common widgets, allowing us to build a rich toolbar control in our applications; it incor...
- XP-More: XP-More is a tool that helps manage Windows 7 Virtual Machines (XP Mode and any other). Specifically, it makes duplication of VMs a no brainer - no...
- Yodelay .NET Framework Extensions: The Yodelay .NET Framework Extensions project provides a library of components that make many kinds of programming tasks simpler. These include bas...

New Releases
- ClipoWeb: ClipoWeb 1.0: First Beta release of the ClipoWeb web application
- DDDSample.Net: 0.8: This release contains all four versions of DDDSample.Net available in previous 0.7, and a brand new one: the Layered Model version. Layered Model demon...
- DotNetNuke Blueprint: 00.00.02: Added to this version: CSS Reset, Skin version including Grids. This version will soon be updated with corresponding HTML version and DNN template
- Esferatec.Text.RegularExpressions: 3.5.1003.1001: first stable release of the class; the assembly file is ready to use, the documentation is complete;
- Excel VSTO SQL Server Browser: Sample Only: Sample without Ribbon UI; if you close the TaskPane you will no longer be able to open it without restarting Excel
- Folder Bookmarks: Folder Bookmarks 1.5.5: This is the latest version of Folder Bookmarks (1.5.5), with the new Archive Manager and Archive Viewer. It has an installer - it will create a dir...
- Gardens Point LEX: Gardens Point LEX, Version 1.1.3: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. Changes: Version 1.1.3 corre...
- Gardens Point Parser Generator: Gardens Point Parser Generator V1.4.0: The distribution is a zip archive which contains the binary executables, documentation, source code and examples. Changes: Version 1.4.0 of GPPG has...
- HKGolden Express: HKGoldenExpress (Build 201004181455): New features: Added rating of each topic. (Note: This feature is available since Build 201004172120.) Bug fix: Handle invalid XML character in XML...
- Home Access Plus+: v4.0.0.0 Beta: v4.0.0.0 Beta Change Log: Moved to using .net 4.0; New Silverlight Uploader; Various .net 4 fixes and tweaks. File Changes: All fixes have changed
- HTML Ruby: 6.21.6: Reduced performance hit on pages with heavy DOM manipulations; Fixed issue where empty tags caused it to apply invalid spacing values; Stop spaci...
- LINQ to VFP: LinqToVfp (v1.0.17.2): Modified to allow using RecNo as a primary key. This build requires IQToolkit v0.17b.
- Managed UI Flow for ASP.NET MVC Framework: Preview 1: The source available on this site does not reflect the final state of the project; it is a preview of what will be shipping in the framework in th...
- MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (2): Super minor update to accommodate the new Blend 4 RC. Only changes: The path to the Blend 4 templates changed to be "My Documents\Expression\Blend...
- N2 CMS: 2.0 rc: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes (1.5 -> 2.0 release candid...
- OpenGL ES 2.0 Compact Framework Wrapper: Sample application CAB with texturing: This took some time as it was pretty hard to get the texture loaded and setup so that it would bind to the sampler2D in the fragment shader. Featu...
- Orchard Blueprint Theme: 00.00.01: This is the first release of this project, still in a very alpha version. Very soon this release is to be updated with the HTML version of the them...
- RoughJs: RoughJsSL: This is the Silverlight library's compiler
- Sharepoint 2010 Alert Controller: Sharepoint 2010 Alert Controller: After you download the WSP file you can get help from the Home Page
- SharePoint LogViewer: SharePoint LogViewer 2.5: Minimize log viewer to tray; Get popup notification of SharePoint log events from tray; Redirect log entries to event log; Send email notifications on...
- Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.1: This is a minor update which includes the following changes: Code consolidation across the whole project; Additional site data captured. See solut...
- Stardust: Stardust 1.0: First stable version of Stardust (Build 172)
- StardustExtensions: Facebook Extension: Extension for Stardust to upload and post images on Facebook.
- StardustExtensions: Facebook Extension (Source): The source code of an extension for Stardust used to post images on Facebook.
- StardustExtensions: WPF Example: This is an example extension. Uses WPF to create a Window and say "Hello World!" Is a perfect download if you want to start writing Stardust Extensions
- StardustExtensions: WPF Example Source: This is the source code of an extension that creates a Window using WPF & displays a simple text. Is great as an example of creating Stardust Exten...
- TFTP Server: TFTP Server 1.1 Beta installer: New MSI based installer; Installs a TFTP service; Supports multiple servers on different endpoints, with every server pointing to its own root di...
- TiledLib: TiledLib 1.1: This download is for prebuilt DLLs and a demo project. For the full source code, use the Source Code tab. Changes: Bug fixes in a few methods; Ad...
- Trance Layer: TranceLayer Digger: The Digger version is a beta. It is intended to be used as a demonstration of muscles while lacking a set of features that are in the docs. The set of ...
- uManage - Active Directory Self-Service Portal: uManage v1.2 (.NET 4.0 RTM): New release v1.2 adds the Administrative Portal as well as the requirement of a MSSQL database (2005+). The Setup Wizard has also been updated to i...
- Unoficcial NeoFM.hu NowPlaying: NeoNotifier: First release. Alpha, but usable.
- VidCoder: 0.2.1: Changes: Added 2-pass encoding; Fixed x264 options getting mangled during p-invoke; Fixed intermittent crash with logging window open due to thre...
- WCF RIA Services Contrib: WCF RIA Services Contrib RC2 Release: This version is for the WCF RIA Services RC2 (SL4 RTM) release. The ApplyState has been modified in this version to disable validation during proces...
- WiiCIS.NET: WiiCIS.NET v0.2: Changes: Removal of WiimoteManager, connection must be done manually; Accelerometer orientation was originally in degrees, is now in radians; ...
- WinToolbar: WinToolbar Source code plus sample: This zip file contains the current version source code and libraries plus a testrunner (sample app).
- XP-More: 0.9 (Beta): Most of the functionality is in place, final polishing will be done soon.

Most Popular Projects
- Facebook Developer Toolkit
- WSPBuilder (SharePoint WSP tool)
- QuickGraph, Graph Data Structures And Algorithms for .Net
- Performance Analysis of Logs (PAL) Tool
- patterns & practices: Team Development with Visual Studio Team Foundation Server
- TFS Integration Platform
- patterns & practices: Performance Testing Guidance for Web Applications
- patterns & practices: Enterprise Library Contrib
- JSON Viewer
- Managed Wifi API

Most Active Projects
- Rawr
- patterns & practices – Enterprise Library
- Industrial Dashboard
- Ionics Isapi Rewrite Filter
- Farseer Physics Engine
- MVVM Light Toolkit
- jQuery Library for SharePoint Web Services
- N2 CMS
- Caliburn: An Application Framework for WPF and Silverlight
- BlogEngine.NET

    Read the article

  • How to Avoid Your Next 12-Month Science Project

    - by constant
While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory make Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades? On the surface, this may be true. And some people tried exactly that: They tried to put together their own version of Exalogic, but then they discover there's a lot more to building a system than buying hardware and assembling it together. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting together a bunch of servers, a storage array and an InfiniBand network into a rack? Let's explore some of the special sauce that makes Exalogic unique and un-copyable, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company.

Engineering Systems is Hard Work!

The backbone of Exalogic is its InfiniBand network: 4 times better bandwidth than even 10 Gigabit Ethernet, and only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed: It is true that Exalogic uses a standard, open-source Open Fabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with fastest speed in mind (which is good) but, despite the name, not many other enterprise-class requirements are included (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users didn't have the need to implement them:

- More than 100 bug fixes in the pieces that were not related to the Message Passing Interface Protocol (MPI), which is the protocol that HPC users use most of the time, but which is less useful in the enterprise.
- Performance optimizations and tuning across the whole IB stack: From Switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance.
- Ethernet over IB (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with datacenters around it. That's why Oracle added Ethernet over InfiniBand technology to it that allows for creating many virtual 10GBE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB to 10GBE gateway switches, and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which are conveniently shipped with a whole server infrastructure attached!
- IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class.
- HA capability for SDP. High Availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy. If any cable or port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work. So in addition to Weblogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level.
- Security, for example spoof-protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers.
- InfiniBand Partitioning and Quality-of-Service (QoS): One of the first questions we get from customers about Exalogic is: “How can we implement multi-tenancy?” The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition to that, QoS allows administrators to prioritize traffic flow in multi-tenancy environments so they can keep their service levels where it matters most.
- Resilient IB Fabric Management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's Infiniband switches come with pre-integrated, highly available fabric management with seamless integration into Oracle Enterprise Manager Ops Center.

In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack!

Exabus: Because it's not About the Size of Your Network, it's How You Use it!

So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one? Anyway, the next step is to actually leverage that InfiniBand performance. Here are the choices:

- Use traditional TCP/IP on top of the InfiniBand stack, or
- Develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols.

While more bandwidth is always a good thing, it's actually the low latency that enables superior performance for your applications when running on any networking infrastructure: The lower the latency, the faster the response travels through the network and the more transactions you can close per second. The reason why InfiniBand is such a low latency technology is that it gets rid of most if not all of your traditional networking protocol stack: Data is literally beamed from one region of RAM in one server into another region of RAM in another server with no kernel/drivers/UDP/TCP or other networking stack overhead involved! Which makes option 1 a no-go: Adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be ok in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which only leaves option 2: Integrating your middleware with fast, low-level InfiniBand protocols. And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic:

- RDMA and SDP integration at the JDBC driver level (SDP), for Oracle Weblogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware can communicate a lot faster with each other and the Oracle database than by using standard networking protocols.
- Seamless integration of Ethernet over InfiniBand from Exalogic's Gateway switches into the OS.
- Oracle Weblogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your ramps so you can feed it with many cars in parallel.
- Integration of Weblogic with Oracle Exadata for faster performance, optimized session management and failover.

As you see, “Exabus” is Oracle's word for describing all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware level. All working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware!

The Extras: Less Hassle, More Productivity, Faster Time to Market

And then there are the other advantages of Engineered Systems that are true for Exalogic the same as they are for every other Engineered System:

- One simple purchasing process: No headaches due to endless RFPs and no “Will X work with Y?” uncertainties.
- Everything has been engineered together: All kinds of bugs and problems have already been fixed at the design level that would have only manifested themselves after you have built the system from scratch.
- Everything is built, tested and integrated at the factory level. Less integration pain for you, faster time to market.
- Every Exalogic machine world-wide is identical to Oracle's own machines in the lab: Instant replication of any problems you may encounter, faster time to resolution.
- Simplified patching, management and operations.
- One throat to choke: Imagine finger-pointing hell for systems that have been put together using several different vendors. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved.

For more business-centric values, read The Business Value of Engineered Systems.

Conclusion: Buy Exalogic, or get ready for a 6-12 Month Science Project

And here's the reason why it's not easy to "build your own Exalogic": There's a lot of work required to make such a system fly. In fact, anybody who is starting to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project. And the outcome is likely to not be very enterprise-class. And it won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: It takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand has been reserved for HPC scientists for such a long time.
And only Oracle can bring the power of InfiniBand in an enterprise-class, ready-to-use, pre-integrated version to customers, without the develop/integrate/support pain. For more details, check the recently updated Exalogic overview white paper. P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application, you usually use one of these:

Time Measurement Method - Potential Pitfalls

- Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily - from Intel's designer's manual vol. 3b, section 16.11.1: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
- DateTime.Now: Good, but it only has a resolution of about 16 ms, which may not be enough if you need more accuracy.

Reporting Method - Potential Pitfalls

- Console.WriteLine: OK if not called too often.
- Debug.Print: Are you really measuring performance with debug builds? Shame on you.
- Trace.WriteLine: Better, but you need to plug in a good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

In general it is a good idea to use a tracing library that measures the timing for you, so you only need to decorate some methods with tracing and can later verify whether something has changed for the better or worse (a minimal sketch of such a helper follows after the list below). In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well: when you measure a quantum system, there is a lower limit to how accurately you can measure it. The Heisenberg uncertainty relation tells us that you cannot measure both the momentum and the location of a particle in a quantum system at the same time with arbitrary accuracy. For programmers, the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you will need to store them somewhere. The fastest storage space besides the CPU cache is memory, but if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store the data somewhere, which costs you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit to:

1. Measure with a profiler to get an idea where potential bottlenecks are.
2. Measure again, tracing only the specific methods, to check if a method is really worth optimizing.
3. Optimize it.
4. Measure again.
5. Be surprised that your optimization has made things worse.
6. Think harder.
7. Implement something that really works.
8. Measure again.
9. Finished! - Or look for the next bottleneck.
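To make the "decorate some methods with tracing" idea concrete, here is a minimal sketch of such a helper (my illustration, not taken from any particular tracing library):

using System;
using System.Diagnostics;

// Minimal tracing scope: wrap the code to be measured in a using block
// and the elapsed time is reported when the scope is disposed.
sealed class TraceScope : IDisposable
{
    private readonly string _name;
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    public TraceScope(string name) { _name = name; }

    public void Dispose()
    {
        _watch.Stop();
        Trace.WriteLine(String.Format("{0} took {1:N3} ms",
            _name, _watch.Elapsed.TotalMilliseconds));
    }
}

// Usage: using (new TraceScope("LoadCustomers")) { LoadCustomers(); }

Because it is based on Stopwatch, the helper inherits the pitfalls listed in the table above, but it makes before/after comparisons repeatable without a profiler attached.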
Recently I looked into issues with serialization performance. For serialization, DataContractSerializer was used, and I was not sure whether XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format - a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

using ProtoBuf;
using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;
using System.Runtime.Serialization;

[DataContract, Serializable]
class Data
{
    [DataMember(Order = 1)] public int IntValue { get; set; }
    [DataMember(Order = 2)] public string StringValue { get; set; }
    [DataMember(Order = 3)] public bool IsActivated { get; set; }
    [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
}

class Program
{
    static MemoryStream _Stream = new MemoryStream();

    // Reset the shared stream before each serialization run.
    static MemoryStream Stream
    {
        get
        {
            _Stream.Position = 0;
            _Stream.SetLength(0);
            return _Stream;
        }
    }

    static void Main(string[] args)
    {
        DataContractSerializer ser = new DataContractSerializer(typeof(Data));
        Data data = new Data
        {
            IntValue = 100,
            IsActivated = true,
            StringValue = "Hi this is a small string value to check if serialization does work as expected"
        };

        var sw = Stopwatch.StartNew();
        int Runs = 1000 * 1000;
        for (int i = 0; i < Runs; i++)
        {
            //ser.WriteObject(Stream, data);
            Serializer.Serialize<Data>(Stream, data);
        }
        sw.Stop();
        Console.WriteLine("Did take {0:N0}ms for {1:N0} objects",
            sw.Elapsed.TotalMilliseconds, Runs);
        Console.ReadLine();
    }
}

The results are indeed promising:

Serializer     Time in ms   N objects
protobuf-net          807   1,000,000
DataContract        4,402   1,000,000

Nearly a factor of 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good), but the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but that speed has a cost: the first time a type is de/serialized, it uses some very smart code generation, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1:

Serializer     Time in ms   N objects
protobuf-net           85   1
DataContract           24   1

The code-gen overhead is significant and can take up to 200 ms for more complex types. The break-even point where the code-gen cost is amortized by the faster serialization is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types but only 1,000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The approach I finally ended up with was to reduce the number of types and to serialize primitive types directly via BinaryWriter, which turned out to be a pretty good alternative. It sounded good - until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data I found that one of the 1,000 calls took 50% of the time.
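A side note on that code-gen cost: when the set of serialized types is known up front, protobuf-net lets you pay the price eagerly at application startup instead of inside the first (measured) serialization. A minimal sketch - PrepareSerializer is the protobuf-net API for this, the warm-up placement and type list are my assumptions:

using ProtoBuf;

static class SerializerWarmup
{
    // Call once during application startup, before any timing-sensitive
    // work: each call triggers the code generation for one type, so the
    // first real serialization no longer pays the ~85 ms penalty.
    public static void Run()
    {
        Serializer.PrepareSerializer<Data>();
        // ...repeat for the other types used on the wire.
    }
}

This does not make the cost disappear, of course; it only moves it to a place where nobody is measuring.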
So how do I find out which call it was? Normal profilers fall short at this discipline. A relatively unknown profiler (a status it does not deserve) is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out whether that stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disk, slowing things down for all other processes for a few seconds as well. There is a big difference between warm and cold startup. If you restart all processes, you can basically forget the first run: the OS disk cache, JIT and GCs make the measured timings very volatile. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff, it can become pretty hard to find things worth optimizing when no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing, I make the code changes and measure again to check whether something has changed. If it has got slower, and I am certain that my change should have made it faster, I can blame the GC again. The thing is: if you optimize your code and allocate fewer objects, the GC times will shift to some other location. If you are unlucky, this makes your faster code look slower, because you now see GCs at times where none happened before. This is where things get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things quickly and in a reliable manner - then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) cause a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all.
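One cheap sanity check for the "blame the GC" hypothesis described above: sample the collection counters around the measured region and discard runs where they moved. A minimal sketch (my illustration):

using System;
using System.Diagnostics;

static class GcAwareTimer
{
    // Times an action and reports whether a garbage collection happened
    // in between, so perturbed runs can be discarded instead of being
    // misinterpreted as a real regression.
    public static void Measure(string name, Action action)
    {
        int gen0 = GC.CollectionCount(0);
        int gen2 = GC.CollectionCount(2);
        var sw = Stopwatch.StartNew();

        action();

        sw.Stop();
        bool perturbed = GC.CollectionCount(0) != gen0 ||
                         GC.CollectionCount(2) != gen2;
        Console.WriteLine("{0}: {1:N3} ms{2}", name,
            sw.Elapsed.TotalMilliseconds,
            perturbed ? " (GC ran during measurement!)" : "");
    }
}

Next time: Commercial profiler shootout.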

    Read the article

  • The fastest way to resize images from ASP.NET. And it’s (more) supported-ish.

    - by Bertrand Le Roy
    I’ve shown before how to resize images using GDI, which is fairly common but is explicitly unsupported because we know of very real problems this can cause. Still, many sites use that method, because those problems are fairly rare and because most people assume it’s the only way to get the job done. Plus, it works in medium trust. More recently, I’ve shown how you can use WPF APIs to do the same thing and get JPEG thumbnails 2.5 times faster than GDI (even now that GDI ultimately uses WIC to read and write images). The boost in performance is great, but it comes at a cost that you may or may not care about: it won’t work in medium trust. It’s also just as unsupported as the GDI option. What I want to show today is how to use the Windows Imaging Components from ASP.NET directly, without going through WPF. The approach has the great advantage that it’s been tested and proven to scale very well. The WIC team tells me you should be able to call support and get answers if you hit problems. Caveats exist, though. First, this is using interop, so until a signed wrapper sits in the GAC, it will require full trust. Second, the APIs have a very strong smell of native code and are definitely not .NET-friendly. And finally, the most serious problem is that older versions of Windows don’t offer MTA support for image decoding. MTA support is only available on Windows 7, Vista and Windows Server 2008; on 2003 and XP you’ll only get STA support. That means the thread safety we so badly need for server applications is not guaranteed on those operating systems. To make it work there, you’d have to spin up specialized threads yourself and manage the lifetime of your objects, which is outside the scope of this article. We’ll assume that we’re fine with all this and that we’re running on 7 or 2008 under full trust. Be warned that the code that follows is not simple or very readable. This is definitely not the easiest way to resize an image in .NET. Wrapping native APIs such as WIC in a managed wrapper is never easy, but fortunately we won’t have to: the WIC team already did it for us and released the results under MS-PL. The InteropServices folder, which contains the wrappers we need, is in the WicCop project, but I’ve also included it in the sample that you can download from the link at the end of the article. In order to produce a thumbnail, we first have to obtain a decoding frame object that WIC can use. Like with WPF, that object will contain the command to decode a frame from the source image but won’t do the actual decoding until necessary. Getting the frame is done by reading the image bytes through a special WIC stream that you can obtain from a factory object that we’re going to reuse for lots of other tasks:

var photo = File.ReadAllBytes(photoPath);
var factory = (IWICComponentFactory)new WICImagingFactory();
var inputStream = factory.CreateStream();
inputStream.InitializeFromMemory(photo, (uint)photo.Length);
var decoder = factory.CreateDecoderFromStream(inputStream, null,
    WICDecodeOptions.WICDecodeMetadataCacheOnLoad);
var frame = decoder.GetFrame(0);

We can read the dimensions of the frame using the following (somewhat ugly) code:

uint width, height;
frame.GetSize(out width, out height);

This enables us to compute the dimensions of the thumbnail, as I’ve shown in previous articles.
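As an aside, the dimension computation referenced above might look like the following sketch (the aspect-ratio-preserving math is standard; the 120-pixel maximum edge is an arbitrary value I picked for illustration):

// Scale the image so that its longest edge becomes maxEdge pixels,
// preserving the aspect ratio; Math.Max keeps degenerate sizes at 1 pixel.
static void ComputeThumbnailSize(uint width, uint height, uint maxEdge,
    out uint thumbWidth, out uint thumbHeight)
{
    double scale = (double)maxEdge / Math.Max(width, height);
    thumbWidth = Math.Max(1u, (uint)Math.Round(width * scale));
    thumbHeight = Math.Max(1u, (uint)Math.Round(height * scale));
}

// Example: ComputeThumbnailSize(width, height, 120, out thumbWidth, out thumbHeight);

We now need to prepare the output stream for the thumbnail.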
WIC requires a special kind of stream, IStream (not implemented by System.IO.Stream), and doesn’t directly understand .NET streams. It does provide a number of implementations, but not exactly what we need here: we need to output to memory, because we’ll want to persist the same bytes to the response stream and to a local file for caching. The memory-bound version of IStream requires a fixed-length buffer, but we won’t know the length of the buffer before we resize. To solve that problem, I’ve built a derived class from MemoryStream that also implements IStream. The implementation is not very complicated - it just delegates the IStream methods to the base class - but it involves some native pointer manipulation. Once we have a stream, we need to build the encoder for the output format, which could be anything that WIC supports. For web thumbnails, our only reasonable options are PNG and JPEG. I explored PNG because it’s a lossless format and because WIC does support PNG compression. That compression is not very efficient, though, and JPEG offers good quality with much smaller file sizes. On the web, it matters. I found the best PNG compression option (adaptive) to give files that are about twice as big as 100%-quality JPEG (an absurd setting), 4.5 times bigger than 95%-quality JPEG, and 7 times larger than 85%-quality JPEG, which is still more than acceptable quality. As a consequence, we’ll use JPEG. The JPEG encoder can be prepared as follows:

var encoder = factory.CreateEncoder(Consts.GUID_ContainerFormatJpeg, null);
encoder.Initialize(outputStream,
    WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache);

The next operation is to create the output frame:

IWICBitmapFrameEncode outputFrame;
var arg = new IPropertyBag2[1];
encoder.CreateNewFrame(out outputFrame, arg);

Notice that we are passing in a property bag. This is where we’re going to specify our only encoding parameter, the JPEG quality setting:

var propBag = arg[0];
var propertyBagOption = new PROPBAG2[1];
propertyBagOption[0].pstrName = "ImageQuality";
propBag.Write(1, propertyBagOption, new object[] { 0.85F });
outputFrame.Initialize(propBag);

We can then set the resolution of the thumbnail to 96 dpi, something we weren’t able to do with WPF and had to hack around:

outputFrame.SetResolution(96, 96);

Next, we set the size of the output frame and create a scaler from the input frame and the computed dimensions of the target thumbnail:

outputFrame.SetSize(thumbWidth, thumbHeight);
var scaler = factory.CreateBitmapScaler();
scaler.Initialize(frame, thumbWidth, thumbHeight,
    WICBitmapInterpolationMode.WICBitmapInterpolationModeFant);

The scaler is using the Fant method, which I think is the best looking one, even if it seems a little softer than cubic (zoomed to better show the defects): Cubic | Fant | Linear | Nearest neighbor. We can write the source image to the output frame through the scaler:

outputFrame.WriteSource(scaler, new WICRect {
    X = 0, Y = 0, Width = (int)thumbWidth, Height = (int)thumbHeight });

And finally we commit the pipeline that we built and get the byte array for the thumbnail out of our memory stream:

outputFrame.Commit();
encoder.Commit();
var outputArray = outputStream.ToArray();
outputStream.Close();

That byte array can then be sent to the output stream and to the cache file.
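For reference, the MemoryStream-derived IStream wrapper mentioned earlier might look roughly like the sketch below. This is my reconstruction of the idea, not the author's actual implementation; only the members the JPEG encoder needs are fleshed out:

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Runtime.InteropServices.ComTypes;

class ComMemoryStream : MemoryStream, IStream
{
    // IStream reports results back through native pointers, hence Marshal.
    public void Read(byte[] pv, int cb, IntPtr pcbRead)
    {
        int read = base.Read(pv, 0, cb);
        if (pcbRead != IntPtr.Zero) Marshal.WriteInt32(pcbRead, read);
    }

    public void Write(byte[] pv, int cb, IntPtr pcbWritten)
    {
        base.Write(pv, 0, cb);
        if (pcbWritten != IntPtr.Zero) Marshal.WriteInt32(pcbWritten, cb);
    }

    public void Seek(long dlibMove, int dwOrigin, IntPtr plibNewPosition)
    {
        long pos = base.Seek(dlibMove, (SeekOrigin)dwOrigin);
        if (plibNewPosition != IntPtr.Zero) Marshal.WriteInt64(plibNewPosition, pos);
    }

    public void SetSize(long libNewSize) { SetLength(libNewSize); }

    public void Stat(out System.Runtime.InteropServices.ComTypes.STATSTG pstatstg,
        int grfStatFlag)
    {
        pstatstg = new System.Runtime.InteropServices.ComTypes.STATSTG
        { cbSize = Length, type = 2 /* STGTY_STREAM */ };
    }

    // Members the encoder does not call in this scenario.
    public void Commit(int grfCommitFlags) { }
    public void Revert() { throw new NotSupportedException(); }
    public void CopyTo(IStream pstm, long cb, IntPtr pcbRead, IntPtr pcbWritten)
    { throw new NotSupportedException(); }
    public void LockRegion(long libOffset, long cb, int dwLockType)
    { throw new NotSupportedException(); }
    public void UnlockRegion(long libOffset, long cb, int dwLockType)
    { throw new NotSupportedException(); }
    public void Clone(out IStream ppstm)
    { throw new NotSupportedException(); }
}

Once we’ve gone through this exercise, it’s only natural to wonder whether it was worth the trouble.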
I ran this method, as well as GDI and WPF resizing, over thirty twelve-megapixel images for JPEG qualities between 70% and 100%, and measured the file size and the time to resize. Here are the results (charts: size of resized images; time to resize thirty 12-megapixel images). There is not much to see on the size graph: sizes from WPF and WIC are equivalent, which is hardly surprising as WPF calls into WIC. There is just an anomaly at 75% for WPF that I noted in my previous article and that disappears when using WIC directly. But overall, using WPF or WIC over GDI represents a slight win in file size. The time to resize is more interesting. WPF and WIC get similar times, although WIC seems to always be a little faster. Not surprising, considering WPF is using WIC; the margin of error on these results is probably fairly close to the time difference. As we already knew, the time to resize does not depend on the quality level, only the size does. This means that the only decision you have to make here is size versus visual quality. This third approach to server-side image resizing on ASP.NET seems to converge on the fastest possible one. We have marginally better performance than WPF, but with the additional peace of mind that this approach is sanctioned for server-side usage by the Windows Imaging team. It still doesn’t work in medium trust. That is a problem, and it shows the way for future server-friendly managed wrappers around WIC. The sample code for this article can be downloaded from: http://weblogs.asp.net/blogs/bleroy/Samples/WicResize.zip The benchmark code can be found here (you’ll need to add your own images to the Images directory and then add those to the project, with content and copy-if-newer in the properties of the files in the solution explorer): http://weblogs.asp.net/blogs/bleroy/Samples/WicWpfGdiImageResizeBenchmark.zip WIC tools can be downloaded from: http://code.msdn.microsoft.com/wictools To conclude, here are some of the resized thumbnails at 85% Fant.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

2007

CASE Statement in ORDER BY Clause – ORDER BY using Variable This article was written at the request of the Application Development Team Leader of my company. His team encountered code where the application was preparing a string for the ORDER BY clause of the SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP), and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but still does not give the best performance.

SSMS – View/Send Query Results to Text/Grid/Files Results to Text – CTRL + T; Results to Grid – CTRL + D; Results to File – CTRL + SHIFT + F

2008

Introduction to SPARSE Columns Part 2 I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail. I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns.

Deferred Name Resolution How come an SP can be created successfully when the table name is incorrect, but cannot be created when an incorrect column is used?

2009

Backup Timeline and Understanding of Database Restore Process in Full Recovery Model In general, database backups in full recovery mode are taken as three different kinds of database files: Full Database Backup, Differential Database Backup, Log Backup.

Restore Sequence and Understanding NORECOVERY and RECOVERY While doing a RESTORE operation, if you are restoring database files, always use the NORECOVERY option, as that will keep the database in a state where more backup files can be restored. This will also keep the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to bring the database online and operational.

Four Different Ways to Find Recovery Model for Database Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know the various methods of performing a single task.

Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database When Information Schema is used, we will not be able to discern between primary key and foreign key; we will have both keys together. In the case of the sys schema, we can query the data in our preferred way and can join this table to another table to retrieve additional data.

Get Last Running Query Based on SPID SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID.

2010

SELECT * FROM dual – Dual Equivalent Dual is a table that is created by Oracle together with the data dictionary. It consists of exactly one column named "dummy" and one record. The value of that record is X. You can check the content of the DUAL table using the following syntax: SELECT * FROM dual

Identifying Statistics Used by Query Someone asked this question in my query optimization and performance tuning training class:
“Can I know which statistics were used by my query?”

2011

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31 What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Indexes per Table? Explain a Few of the New Features of SQL Server 2008 Management Studio Explain IntelliSense for Query Editing Explain MultiServer Query Explain Query Editor Regions Explain Object Explorer Enhancements Explain Activity Monitors

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31 What is Service Broker? Where are SQL Server Usernames and Passwords Stored in SQL Server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does the TOP Operator Do? What is CTE? What is the MERGE Statement? What is a Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31 What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is the Use of the EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between an Update Lock and an Exclusive Lock?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31 How will you Handle Errors in SQL SERVER 2008? What is RAISERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is the Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31 How to Copy Data from One Table to Another Table? What are Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31 How can I Track the Changes or Identify the Latest Insert-Update-Delete on a Table? What is CPU Pressure? How can I Get Data from a Database on Another Server? What are the Bookmark Lookup and RID Lookup? What is the Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is the Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check whether Automatic Statistics Update is Enabled or not? How to Find the Index Size for Each Index on a Table? What is the Difference between Seek Predicate and Predicate? What are the Basics of Policy Management? What are the Advantages of Policy Management?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31 What are Policy Management Terms? What is the ‘FILLFACTOR’? Where in MS SQL Server is ’100’ equal to ‘0’? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are Various Limitations of the Views? What is a Covered Index? When I Delete any Data from a Table, does SQL Server reduce the size of that table? What are Wait Types? How to Stop a Log File Growing too Big? If a Stored Procedure is Encrypted, can we see its definition in Activity Monitor?
2012

Example of Width Sensitive and Width Insensitive Collation Width Sensitive Collation: when a single-byte character (half-width) and the same character represented as a double-byte character (full-width) do not compare as equal, the collation is width sensitive. In this example we have one table with two columns: one column has a width sensitive collation and the second column has a width insensitive collation.

Find Column Used in Stored Procedure – Search Stored Procedure for Column Name A very interesting conversation about how to find a column used in a stored procedure. There are two different characters in the story, and both are having a conversation about how to find a column in a stored procedure. Here is the two-part story: Part 1 | Part 2

SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage

Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video In simple words: in many cases a database moves from one place to another, and it is not always possible to back up and restore the database. There are cases when only part of the database (with schema and data) has to be moved. In this video we learn that we can easily generate a script for the schema and the data and move it from one server to another.

INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1 I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of a column can be between 0 and a large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject?

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Get the Latest on MySQL Enterprise Edition

    - by monica.kumar
    Oracle just announced MySQL 5.5 Enterprise Edition. MySQL Enterprise Edition is a comprehensive subscription that includes:

- MySQL Database
- MySQL Enterprise Backup
- MySQL Enterprise Monitor
- MySQL Workbench
- Oracle Premier Support; 24x7, Worldwide

New in this release is the addition of MySQL Enterprise Backup and MySQL Workbench, along with enhancements in MySQL Enterprise Monitor. Recent integration with My Oracle Support allows MySQL customers to access the same support infrastructure used for Oracle Database customers. Joint MySQL and Oracle customers can experience faster problem resolution by using a common technical support interface. Supporting multiple operating systems, including Linux and Windows, MySQL Enterprise Edition can enable customers to achieve up to 90 percent TCO savings over Microsoft SQL Server. See what Booking.com is saying: “With more than 50 million unique monthly visitors, performance and uptime are our first priorities,” said Bert Lindner, Senior Systems Architect, Booking.com. “The MySQL Enterprise Monitor is an essential tool to monitor, tune and manage our many MySQL instances. It allows us to zoom in quickly on the right areas, so we can spend our time and resources where it matters.” Read the press release for details on technology enhancements.

    Read the article

  • Converting 2D Physics to 3D.

    - by static void main
    I'm new to game physics and I am trying to adapt a simple 2D ball simulation to a 3D simulation with the Java3D library. I have two problems:

1) I noted down the values generated by the engine: X/Y are too high, and the minX/minY/maxX/maxY values are causing trouble. Sometimes the balls are drawn but do not move; sometimes they go out of the panel; sometimes they move only within a small area; sometimes they just stick in one place...

2) I'm unable to choose suitable default values for the 3D scene: the engine's values are set with respect to 2D screen coordinates, while the 3D graphics have their own scaling/resolution - that is my only problem. Please help. This is the code:

public class Ball extends GameObject {

    private float x, y;           // Ball's center (x, y)
    private float speedX, speedY; // Ball's speed per step in x and y
    private float radius;         // Ball's radius

    // Collision detected by collision detection and response algorithm?
    boolean collisionDetected = false;

    // If collision detected, the next state of the ball.
    // Otherwise, meaningless.
    private float nextX, nextY;
    private float nextSpeedX, nextSpeedY;

    private static final float BOX_WIDTH = 640;
    private static final float BOX_HEIGHT = 480;

    /**
     * Constructor. The velocity is specified in polar coordinates of speed
     * and moveAngle (for user friendliness), in Graphics coordinates with
     * an inverted y-axis.
     */
    public Ball(float x, float y, float radius, float speed,
            float angleInDegree, Color color) {
        // Note: the unused String parameter from the original snippet was
        // dropped so the constructor matches the call sites below.
        this.x = x;
        this.y = y;
        // Convert velocity from polar to rectangular x and y.
        this.speedX = speed * (float) Math.cos(Math.toRadians(angleInDegree));
        this.speedY = speed * (float) Math.sin(Math.toRadians(angleInDegree));
        this.radius = radius;
    }

    public void move() {
        if (collisionDetected) {
            // Collision detected, use the values computed.
            x = nextX;
            y = nextY;
            speedX = nextSpeedX;
            speedY = nextSpeedY;
        } else {
            // No collision, move one step and no change in speed.
            x += speedX;
            y += speedY;
        }
        collisionDetected = false; // Clear the flag for the next step
    }

    public void collideWith() {
        // Get the ball's bounds, offset by the radius of the ball
        float minX = 0.0f + radius;
        float minY = 0.0f + radius;
        float maxX = 0.0f + BOX_WIDTH - 1.0f - radius;
        float maxY = 0.0f + BOX_HEIGHT - 1.0f - radius;

        double gravAmount = 0.9811111f;
        double gravDir = (90 / 57.2960285258); // 90 degrees in radians

        // Try moving one full step
        nextX = x + speedX;
        nextY = y + speedY;
        System.out.println("In serializedBall in collision.");

        // If collision detected, reflect on the x and/or y axis
        // and place the ball at the point of impact.
        if (speedX != 0) {
            if (nextX > maxX) { // Check maximum-X bound
                collisionDetected = true;
                nextSpeedX = -speedX; // Reflect
                nextSpeedY = speedY;  // Same
                nextX = maxX;
                nextY = (maxX - x) * speedY / speedX + y; // speedX non-zero
            } else if (nextX < minX) { // Check minimum-X bound
                collisionDetected = true;
                nextSpeedX = -speedX; // Reflect
                nextSpeedY = speedY;  // Same
                nextX = minX;
                nextY = (minX - x) * speedY / speedX + y; // speedX non-zero
            }
        }
        // In case the ball runs over both the borders.
        if (speedY != 0) {
            if (nextY > maxY) { // Check maximum-Y bound
                collisionDetected = true;
                nextSpeedX = speedX;  // Same
                nextSpeedY = -speedY; // Reflect
                nextY = maxY;
                nextX = (maxY - y) * speedX / speedY + x; // speedY non-zero
            } else if (nextY < minY) { // Check minimum-Y bound
                collisionDetected = true;
                nextSpeedX = speedX;  // Same
                nextSpeedY = -speedY; // Reflect
                nextY = minY;
                nextX = (minY - y) * speedX / speedY + x; // speedY non-zero
            }
        }

        speedX += Math.cos(gravDir) * gravAmount;
        speedY += Math.sin(gravDir) * gravAmount;
    }

    public float getSpeed() {
        return (float) Math.sqrt(speedX * speedX + speedY * speedY);
    }

    public float getMoveAngle() {
        return (float) Math.toDegrees(Math.atan2(speedY, speedX));
    }

    public float getRadius() { return radius; }
    public float getX() { return x; }
    public float getY() { return y; }
    public void setX(float f) { x = f; }
    public void setY(float f) { y = f; }
}

Here's how I'm drawing the balls:

// Renamed from "3DMovingBodies": Java identifiers cannot start with a digit.
public class MovingBodies3D extends Applet implements Runnable {

    private static final int BOX_WIDTH = 800;
    private static final int BOX_HEIGHT = 600;

    private int currentNumBalls = 1; // number currently active
    private volatile boolean playing;
    private long mFrameDelay;
    private JFrame frame;
    private int currentFrameRate;

    private Ball[] ball = new Ball[currentNumBalls];
    private Random rand;
    private Sphere[] sphere = new Sphere[currentNumBalls];
    private Transform3D[] trans = new Transform3D[currentNumBalls];
    private TransformGroup[] objTrans = new TransformGroup[currentNumBalls];

    public MovingBodies3D() {
        rand = new Random();
        float angleInDegree = rand.nextInt(360);

        setLayout(new BorderLayout());
        GraphicsConfiguration config = SimpleUniverse.getPreferredConfiguration();
        Canvas3D c = new Canvas3D(config);
        add("Center", c);

        ball[0] = new Ball(0.5f, 0.0f, 0.5f, 0.4f, angleInDegree, Color.yellow);
        // ball[1] = new Ball(1.0f, 0.0f, 0.25f, 0.8f, angleInDegree, Color.yellow);
        // ball[2] = new Ball(0.0f, 1.0f, 0.15f, 0.11f, angleInDegree, Color.yellow);

        trans[0] = new Transform3D();
        // trans[1] = new Transform3D();
        // trans[2] = new Transform3D();

        sphere[0] = new Sphere(0.5f);
        // sphere[1] = new Sphere(0.25f);
        // sphere[2] = new Sphere(0.15f);

        // Create a simple scene and attach it to the virtual universe
        BranchGroup scene = createSceneGraph();
        SimpleUniverse u = new SimpleUniverse(c);
        u.getViewingPlatform().setNominalViewingTransform();
        u.addBranchGraph(scene);

        startSimulation();
    }

    public BranchGroup createSceneGraph() {
        // Create the root of the branch graph
        BranchGroup objRoot = new BranchGroup();

        for (int i = 0; i < currentNumBalls; i++) {
            // Create a simple shape leaf node, add it to the scene graph.
            objTrans[i] = new TransformGroup();
            objTrans[i].setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
            Transform3D pos1 = new Transform3D();
            pos1.setTranslation(randomPos());
            objTrans[i].setTransform(pos1);
            objTrans[i].addChild(sphere[i]);
            objRoot.addChild(objTrans[i]);
        }

        BoundingSphere bounds =
            new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 100.0);

        Color3f light1Color = new Color3f(1.0f, 0.0f, 0.2f);
        Vector3f light1Direction = new Vector3f(4.0f, -7.0f, -12.0f);
        DirectionalLight light1 =
            new DirectionalLight(light1Color, light1Direction);
        light1.setInfluencingBounds(bounds);
        objRoot.addChild(light1);

        // Set up the ambient light
        Color3f ambientColor = new Color3f(1.0f, 1.0f, 1.0f);
        AmbientLight ambientLightNode = new AmbientLight(ambientColor);
        ambientLightNode.setInfluencingBounds(bounds);
        objRoot.addChild(ambientLightNode);

        return objRoot;
    }

    public void startSimulation() {
        playing = true;
        Thread t = new Thread(this);
        t.start();
    }

    public void stop() {
        playing = false;
    }

    public void run() {
        long previousTime = System.currentTimeMillis();
        long currentTime = previousTime;
        long elapsedTime;
        long totalElapsedTime = 0;
        int frameCount = 0;

        while (true) {
            currentTime = System.currentTimeMillis();
            elapsedTime = (currentTime - previousTime); // elapsed time in milliseconds
            totalElapsedTime += elapsedTime;
            if (totalElapsedTime > 1000) {
                currentFrameRate = frameCount;
                frameCount = 0;
                totalElapsedTime = 0;
            }
            for (int i = 0; i < currentNumBalls; i++) {
                ball[i].move();
                ball[i].collideWith();
                drawworld();
            }
            try {
                Thread.sleep(88);
            } catch (Exception e) {
                e.printStackTrace();
            }
            previousTime = currentTime;
            frameCount++;
        }
    }

    public void drawworld() {
        for (int i = 0; i < currentNumBalls; i++) {
            // printTG(objTrans[i], "SteerTG"); // debug helper not shown in the question
            trans[i].setTranslation(
                new Vector3f(ball[i].getX(), ball[i].getY(), 0.0f));
            objTrans[i].setTransform(trans[i]);
        }
    }

    /*
     * Return a random position vector. The numbers are hardwired to be
     * within the confines of the box.
     */
    private Vector3f randomPos() {
        Vector3f pos = new Vector3f();
        pos.x = rand.nextFloat() * 5.0f - 2.5f; // -2.5 to 2.5
        pos.y = rand.nextFloat() * 2.0f + 0.5f; // 0.5 to 2.5
        pos.z = rand.nextFloat() * 5.0f - 2.5f; // -2.5 to 2.5
        return pos;
    } // end of randomPos()

    public static void main(String[] args) {
        System.out.println("Program Started");
        MovingBodies3D bb = new MovingBodies3D();
        MainFrame mf = new MainFrame(bb, 600, 400);
    }
}
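An editorial note on problem 2 (this sketch is my suggestion, not part of the original question, and is written in C# mirroring the Java names): the physics runs in a 640x480 pixel space with y growing downwards, while the Java3D scene works in world units of roughly -2.5..2.5 (see randomPos()). Keeping the physics in pixel space and converting only at render time leaves the collision code untouched:

// Map a position from 2D physics space (pixels, y grows downwards) into
// the 3D world space assumed by the scene (x: -2.5..2.5, y: 0.5..2.5).
static void PixelsToWorld(float x, float y, out float worldX, out float worldY)
{
    const float BoxWidth = 640.0f, BoxHeight = 480.0f;  // Ball.BOX_* bounds
    const float WorldWidth = 5.0f, WorldHeight = 2.0f;  // target extents

    worldX = (x / BoxWidth) * WorldWidth - WorldWidth / 2.0f; // -2.5..2.5
    worldY = (1.0f - y / BoxHeight) * WorldHeight + 0.5f;     // 0.5..2.5, y flipped
}

With a mapping like this, drawworld() would convert before writing the Vector3f into the Transform3D, instead of feeding pixel coordinates (0..640) directly into a scene that is only a few units across, which matches the "balls out of the panel / stuck in place" symptoms described above.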

    Read the article

  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
    What is ADDS? Every Microsoft-oriented infrastructure in today's enterprises depends largely on the Active Directory version built by Microsoft. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well designed and implemented Active Directory implementation makes life for IT personnel and users alike a lot easier: centralised management and the abilities it opens up are ample.

But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all the objects your infrastructure runs on in one way or another. Since it is a Microsoft product, you'll obviously not see Linux or Mac clients listed in here (exceptions exist), but in general we can say it contains everything your company has in place in one form or another.

The domain name services. The Domain Name Service (or DNS for short) is a service which translates IP addresses (the identifiers for each computer in your domain) into readable and easy to understand names. This service is a prerequisite for ADDS to work, and wrong records in a DNS server will make any ADDS service fail. Generally speaking, the DNS service will run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box.

Where to start? If the aim is to put in place a first-time implementation of ADDS in your enterprise, there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as getting it wrong will cause headaches down the line. It is for that reason that I like to start building from the bottom up: begin with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services which can hook into the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure.

Whenever starting an ADDS deployment I ask the client the following questions:

- What are your long-term plans and goals?
- How flexible do you want it?
- Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design?
Those three questions should give some indication of the direction to take, and of whether the client has thought about some things themselves. :)

The technical side of things. Next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services.

Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows:

- Network (WAN/LAN links and physical sites)
- DNS namespacing
- All in one domain, or split up in different domains/forests?
- Security (both for ADDS and physical sites)

The network side of things. Looking at how the network is currently set up can teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc.? Depending on this information we will design our site links (which control replication) in future stages.

DNS namespacing. Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single-label name (contoso instead of contoso.com), but it is highly recommended never to do this. If you do end up with a domain like that for some reason, a lot of services are going to give you good grief in the future (Exchange being one of them). So one of the best practices is to always use a two-part name (contoso.com or contoso.lan, for example).

Internal namespace. A single namespace is just what it sounds like: you have a DNS domain which is different internally from what the client uses as an external namespace, e.g. contoso.com as the external name (out on the internet) and contoso.lan on the internal network. This setup has its advantages in that the DNS side gives you more obscurity from the internet, but it requires additional work to publish services to the web.

External namespace. Quite like the internal namespace, only here you do not differentiate the internal namespace of the company from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network; in other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking, this setup is a bad idea from the security side of things.

Split DNS. Whilst an external namespace design is fairly easy, it involves a lot of security risks: opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. And that is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would be using different DNS servers for lookups on the external network, which have no records of your internal resources unless you explicitly publish them.

All in one or not? In determining your Active Directory design you can look at the following possibilities, listed in increasing order of administrative magnitude:

- Single forest, single domain
- Single forest, multiple domains
- Multiple forests, multiple domains

Microsoft recommends using a single forest, single domain in as many situations as possible.
It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out, I would go with the single forest design to avoid complexity, unless there are strict requirements for multiple forests.

Security. What kind of security is required on the domain, and does this reflect the physical security at the sites? Not every client can afford to have a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). A RODC is a domain controller that has been limited in functionality: in essence it will only cache the data you explicitly tell it to cache, so in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected.

Th- Th- Th- That's all folks! Well, at least for now! In future editions of this series we'll be walking through the different tasks that need to be done and the thought that needs to be put into them. All editions will work from the concept of running a single forest, single domain with a split DNS setup... See you next time!

    Read the article

< Previous Page | 618 619 620 621 622 623 624 625 626 627 628 629  | Next Page >