Search Results

Search found 13928 results on 558 pages for 'large scale nat'.


  • Fast, accurate 2d collision

    - by Neophyte
    I'm working on a 2D top-down shooter, and now need to go beyond my basic rectangle bounding box collision system. I have large levels with many different sprites, all of which are different shapes and sizes. The textures for the sprites are all square PNG files with transparent backgrounds, so I also need a way to register a collision only when the player walks into the coloured part of the texture, and not the transparent background. I plan to handle collision in three stages: check whether any sprites are in range of the player, do a rect bounding box collision test, then do an accurate collision test (this last stage is where I need help). I don't mind advanced techniques, as I want to get this right with all my requirements in mind, but I'm not sure how to approach this, or which techniques or libraries to try. I know that I will probably need to create and store some kind of shape that accurately represents each sprite minus the transparent background. I've read that per-pixel collision is slow, so given my large levels and number of objects I don't think that would be suitable. I've also looked at Box2D, but haven't been able to find much documentation, or any examples of how to get it up and running with SFML.
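    One common way to get the accurate stage without paying full per-pixel cost everywhere is to precompute an opacity mask for each sprite from its PNG's alpha channel at load time, and only walk the pixels inside the bounding-box overlap that the previous stage already confirmed. Below is a minimal sketch of that idea, assuming unrotated, axis-aligned sprites; the x, y and mask attributes are hypothetical fields you would populate when loading the texture.

    ```python
    # Narrow-phase test: do two sprites overlap on opaque pixels?
    # Assumes sprite.x, sprite.y are world-space top-left coordinates and
    # sprite.mask is a 2D list of booleans (True = opaque) built from the PNG alpha.
    def masks_collide(a, b):
        left = max(a.x, b.x)
        right = min(a.x + len(a.mask[0]), b.x + len(b.mask[0]))
        top = max(a.y, b.y)
        bottom = min(a.y + len(a.mask), b.y + len(b.mask))
        if left >= right or top >= bottom:
            return False  # bounding boxes don't even overlap
        # Walk only the overlap region, not the whole textures.
        for wy in range(top, bottom):
            for wx in range(left, right):
                if a.mask[wy - a.y][wx - a.x] and b.mask[wy - b.y][wx - b.x]:
                    return True
        return False
    ```

    If the sprites rotate, a per-sprite convex polygon (or a few of them) tested with the separating axis theorem is the usual alternative to a pixel mask.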

    Read the article

  • Very basic beginner Ruby question to do with elsif and ranges [migrated]

    - by MattKneale
    I've been trying to get to grips with Ruby (for all of an hour) and this is my first language. I've got the following code: var_comparison = 5 print "Please enter a number: " my_num = Integer(gets.chomp) if my_num > var_comparison print "You picked a number greater than 5!" elsif my_num < var_comparison print "You picked a number less than 5!" elsif my_num > 99 print "Your number is too large, man." else print "You picked the number 5!" end Clearly the interpreter never reaches the > 99 branch, because any number greater than 5 already matches the first condition. How do I make it so that any number between 6 and 99 returns "You picked a number greater than 5!", but a number 100 or greater returns "Your number is too large, man!"? Do I need to specifically state a range somehow? How would I best do that? Would it be the normal range methods, e.g. if my_num 6..99 or if my_num.between(6..99)?
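    The fix the question is reaching for is language-neutral: either test the upper bound before the "greater than 5" branch, or make that branch an explicit range check. A minimal sketch of both ideas (written in Python here purely to illustrate the branch ordering, not as Ruby syntax):

    ```python
    COMPARISON = 5
    UPPER_LIMIT = 99

    def classify(n):
        if n > UPPER_LIMIT:                  # widest bound first, so it can ever match
            return "Your number is too large, man."
        elif COMPARISON < n <= UPPER_LIMIT:  # explicit range check for 6..99
            return "You picked a number greater than 5!"
        elif n < COMPARISON:
            return "You picked a number less than 5!"
        else:
            return "You picked the number 5!"

    print(classify(int(input("Please enter a number: "))))
    ```

    The key point is that the branches are evaluated top to bottom and the first match wins, so the > 99 test must come before (or be excluded from) the broader > 5 test.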

    Read the article

  • Should I avoid or embrace asking questions of other developers on the job?

    - by T.K.
    As a CS undergraduate, the people around me are either learning or are paid to teach me, but as a software developer, the people around me have tasks of their own. They aren't paid to teach me, and conversely, I am paid to contribute. When I first started working as a software developer co-op, I was introduced to a huge code base written in a language I had never used before. I had plenty of questions, but didn't want to bother my co-workers with all of them - it wasted their time and hurt my pride. Instead, I spent a lot of time bouncing between IDE and browser, trying to make sense of what had already been written and differentiate between expected behavior and symptoms of bugs. I'd ask my co-workers when I felt that the root of my lack of understanding was an in-house concept that I wouldn't find on the internet, but aside from that, I tried to confine my questions to lunch hours. Naturally, there were occasions where I wasted time searching the internet for something that had, at its heart, an in-house concept, but overall, I felt I was productive enough during my first semester, contributing about as much as one could expect and gaining a pretty decent understanding of large parts of the product. I was wondering what senior developers felt about that mindset. Should new developers ask more questions to get up to speed faster, or should they do the research themselves? I see benefits to both mindsets, and anticipate a large variety of responses, but I figure new developers might appreciate your answers without thinking to ask this question.

    Read the article

  • Salary Negotiation; How Best to Broach the Subject? [closed]

    - by Ed S.
    So I have an upcoming performance review / salary increase and I am at a point in which I believe I will need to negotiate a larger raise than what is to be proposed. As I suspect this may be the case I have been reading as much information on the subject (negotiation) as possible. I work for a great company and fortunately I work under some really talented and reasonable managers. Unfortunately, I am not sure how best to bring up the subject. I don't want to sound greedy and I don't want to start off on the wrong foot. For the sake of argument, assume that I am actually worth more than I am being paid at the moment and I would like to make a counter offer for a relatively large increase (say, boss says 4%, I would like to counter with 15%. I know that seems very large, but I believe I have a case for it.) My question to you, those who are/have been on the other side of this scenario, is how should I start the conversation? What approach would make you most receptive to my plea? I've never negotiated before and I just don't want to start off on the wrong foot. My direct manager is a very straightforward individual, so sugarcoating is not necessary here, but at the same time, I don't want to seem overly aggressive or demanding. Thanks in advance for any advice you can offer.

    Read the article

  • MySQL December Webinars

    - by Bertrand Matthelié
    We'll be running 3 webinars next week and hope many of you will be able to join us:

    MySQL Replication: Simplifying Scaling and HA with GTIDs - Wednesday, December 12, at 15.00 Central European Time. Join the MySQL replication developers for a deep dive into the design and implementation of Global Transaction Identifiers (GTIDs) and how they enable users to simplify MySQL scaling and HA. GTIDs are one of the most significant new replication capabilities in MySQL 5.6, making it simple to track and compare replication progress between the master and slave servers. Register Now

    MySQL 5.6: Building the Next Generation of Web/Cloud/SaaS/Embedded Applications and Services - Thursday, December 13, at 9.00 am Pacific Time. As the world's most popular web database, MySQL has quickly become the leading cloud database, with most providers offering MySQL-based services. Indeed, built to deliver web-based applications and to scale out, MySQL's architecture and features make the database a great fit to deliver cloud-based applications. In this webinar we will focus on the improvements in MySQL 5.6 performance, scalability, and availability designed to enable DBA and developer agility in building the next generation of web-based applications. Register Now

    Getting the Best MySQL Performance in Your Products: Part IV, Partitioning - Friday, December 14, at 9.00 am Pacific Time. We're adding Partitioning to our extremely popular "Getting the Best MySQL Performance in Your Products" webinar series. Partitioning can greatly increase the performance of your queries, especially when doing full table scans over large tables. Partitioning is also an excellent way to manage very large tables. It's one of the best ways to build higher performance into your product's embedded or bundled MySQL, and particularly for hardware-constrained appliances and devices. Register Now

    We have live Q&A during all webinars so you'll get the opportunity to ask your questions!

    Read the article

  • When to open source a project under development? [closed]

    - by QuasarDonkey
    Possible Duplicate: Is it OK to push my code to GitHub while it is still in early development? I've been working on a hobby project for a few months now; it's clocking in at over 15000 source lines of code. A number of people have expressed interest in joining development, and I have every intention of going open source, since it would not be feasible for me to complete the project alone. I'm just not sure when to open-source it. For context, I've noticed that many successful open source projects, such as the Linux kernel, had considerable work done before they were open-sourced. In my case, I'd been planning on open-sourcing it after I complete all the underlying libraries and overall architecture. Is this a mistake; should I just release it right now? I'm worried that since certain critical underlying components haven't been finalized, if people build a large codebase around them, it will be very difficult to change or fix things later. On the other hand, it's a very large project that will require multiple developers to complete in a reasonable time. So when is the right time during development to go open source? Preferably, I'd like to hear from some folks who have started their own projects.

    Read the article

  • I am being paid very little (imo), how can I change this? [migrated]

    - by LagWagon
    I am a web developer with about 4 years of relevant work experience in my field. Recently, I went from making $30/hr working from home contracting for large companies to a full-time job that only pays 40k/yr. The company I work for now is great, nice people, but a little behind the times. I joined with very little experience in SQL development, but they put me in charge of querying the DB and making reports right away, so I had to go in head first and pick up that skill immediately. Which is great - I'm happy I learned more of that, and I really make good time when doing SQL now. However, I'm now doing most of their advanced SQL work. The day I started, another employee who was running an MVC project based in Yii (which is the company's sole piece of software) put in his two weeks' notice. Two weeks later, I'm the only one who knows how to use, access, modify, or update this project. That's quite a large responsibility for an "entry level dev", no? I am also doing highly advanced jQuery for them to modernize their forms and webpages, amongst other things - something I would bet few entry-level developers could do as well as I can. I may be wrong, but I feel that what I'm making now is not acceptable. We don't have reviews, ever, so I can't just wait for one. So I was wondering: do I sound justified in wanting to be paid more, and how can I make this happen?

    Read the article

  • UIScrollView zoomToRect not zooming to given rect (created from UITouch CGPoint)

    - by pmhart
    My application has a UIScrollView with one subview. The subview is an extended UIView which prints a PDF page to itself using layers in the drawLayer event. Zooming using the built-in pinching works great. setZoomScale also works as expected. I have been struggling with the zoomToRect function. I found an example online which makes a CGRect zoomRect variable from a given CGPoint. In the touchesEnded function, if there was a double tap and they are all the way zoomed out, I want to zoom in to that PDFUIView I created as though they were pinching out with the center of the pinch where they double tapped. So assume that I pass the UITouch variable to my function which utilizes zoomToRect if they double tap. I started with the following function I found on Apple's site: http://developer.apple.com/iphone/library/documentation/WindowsViews/Conceptual/UIScrollView_pg/ZoomZoom/ZoomZoom.html The following is a modified version for my UIScrollView extended class: - (void)zoomToCenter:(float)scale withCenter:(CGPoint)center { CGRect zoomRect; zoomRect.size.height = self.frame.size.height / scale; zoomRect.size.width = self.frame.size.width / scale; zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0); zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0); //return zoomRect; [self zoomToRect:zoomRect animated:YES]; } When I do this, the UIScrollView seems to zoom using the bottom right edge of the zoomRect above and not the center. If I make a UIView like this UIView *v = [[UIView alloc] initWithFrame:zoomRect]; [v setBackgroundColor:[UIColor redColor]]; [self addSubview:v]; the red box shows up with the touch point dead in the center. Please note: I am writing this from my PC; I recall messing around with the divide-by-two part on my Mac, so just assume that this draws a rect with the touch point in the center. If the UIView drew off center but zoomed to the right spot it would be all good. However, what happens is that when it performs the zoomToRect, it seems to put the bottom right of the zoomRect at the top left of the zoomed-in result. Also, I noticed that depending on where I click on the UIScrollView, it anchors to different spots. It almost seems like there is a cross down the middle and it's reflecting the points somehow, as though anywhere left of the middle is a negative reflection and anywhere right of the middle is a positive reflection? This seems too complicated; shouldn't it just zoom to the rect that was drawn, as the UIView was able to draw it? I did a lot of research to figure out how to create a PDF that scales in high quality, so I am assuming that using the CALayer may be throwing off the coordinate system? But the UIScrollView should just treat it as a view with 768x985 dimensions. This is sort of advanced; please assume the code for creating the zoomRect is all good. There is something deeper with the CALayer in the UIView which is in the UIScrollView....

    Read the article

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for the past couple of weeks I have been creating a generic script that is able to copy databases. The goal is to be able to specify any database on some server and copy it to some other location, and it should only copy the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. And in the end we are copying only about 3%-20% of databases which are as large as 500GB. I have been using the SMO assemblies to achieve this. This is my first time working with SMO and it took a while to create a generic way to copy the schema objects, filegroups, etc. (It actually helped find some bad stored procs.) Overall I have a working script which is lacking in performance (and at times, times out) and I was hoping you guys would be able to help. When executing the WriteToServer command to copy a large amount of data (6GB) it reaches my timeout period of 1hr. Here is the core code for copying table data. The script is written in PowerShell. $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim() Write-LogOutput "Copying $selectedTable : '$query'" $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source $cmd.CommandTimeout = 120; $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination) $bulkData.DestinationTableName = $selectedTable; $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600 $reader = $cmd.ExecuteReader(); $bulkData.WriteToServer($reader); # Takes forever here on large tables The source and target databases are located on different servers so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me. But when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting the $bulkData.BatchSize to 5000 but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout. I really would like to know why the network is not being used fully. Anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information. Thanks. UPDATE: I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. And what we will do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. It is thrown about 16 after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, that table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the table is copied over without any issues. But going through the process of copying that entire database always fails for that one table. I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errored out. My SqlBulkCopy and Reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that point each time?

    Read the article

  • Interactive Data Language, IDL: Does anybody care?

    - by Alex
    Anyone use a language called Interactive Data Language, IDL? It is popular with scientists. I think it is a poor language because it is proprietary (every terminal running it has to have an expensive license purchased) and it has minimal support (try searching for IDL, the language, right now on Stack Overflow). I am trying to convince my colleagues to stop using it and learn C/C++/Python/Fortran/Java/Ruby. Does anybody know about or even care about IDL enough to have opinions on it? What do you think of it? Should I tell my colleagues to stop wasting their time on it now? How can I convince them?

    Edit: People are getting the impression that I don't know or use IDL. Also, I said IDL has minimal support, which is true in one sense, so I must clarify that the scientific libraries are indeed large. I use IDL all the time, but this is exactly the problem: I am only using IDL because colleagues use it. There is a file format IDL uses, the .sav, which can only be opened in IDL. So I must use IDL to work with this data and transfer the data back to colleagues, but I know I would be more efficient in another language. This is like someone sending you a Microsoft Word file in an email attachment; if you don't understand how wrong that is, then you probably write too many words and not enough code, and you bought Microsoft Word.

    Edit: As an alternative to IDL, Python is popular. Here is a list of the pros of IDL (and the cons) from AstroBetter:

    Pros of IDL:
    - Mature; many numerical and astronomical libraries available
    - Wide astronomical user base
    - Numerical aspect well integrated with language itself
    - Many local users with deep experience
    - Faster for small arrays
    - Easier installation
    - Good, unified documentation
    - Standard GUI run/debug tool (IDLDE)
    - Single widget system (no angst about which to choose or learn)
    - SAVE/RESTORE capability
    - Use of keyword arguments as flags more convenient

    Cons of IDL:
    - Narrow applicability, not well suited to general programming
    - Slower for large arrays
    - Array functionality less powerful
    - Table support poor
    - Limited ability to extend using C or Fortran; such extensions hard to distribute and support
    - Expensive; sometimes a problem collaborating with others that don't have or can't afford licenses
    - Closed source (only RSI can fix bugs)
    - Very awkward to integrate with IRAF tasks
    - Memory management more awkward
    - Single widget system (useless if working within another framework)
    - Plotting: awkward support for symbols and math text; many font systems, portability issues (v5.1 alleviates somewhat); not as flexible or as extensible; plot windows not intrinsically interactive (e.g., pan & zoom)

    Pros of Python:
    - Very general and powerful programming language, yet easy to learn
    - Strong, but optional, Object Oriented programming support
    - Very large user and developer community, very extensive and broad library base
    - Very extensible with C, C++, or Fortran; portable distribution mechanisms available
    - Free; non-restrictive license; Open Source
    - Becoming the standard scripting language for astronomy
    - Easy to use with IRAF tasks
    - Basis of STScI application efforts
    - More general array capabilities
    - Faster for large arrays, better support for memory mapping
    - Many books and on-line documentation resources available (for the language and its libraries)
    - Better support for table structures
    - Plotting framework (matplotlib) more extensible and general
    - Better font support and portability (only one way to do it too)
    - Usable within many windowing frameworks (GTK, Tk, WX, Qt…)
    - Standard plotting functionality independent of framework used
    - Plots are embeddable within other GUIs
    - More powerful image handling (multiple simultaneous LUTs, optional resampling/rescaling, alpha blending, etc.)
    - Support for many widget systems
    - Strong local influence over capabilities being developed for Python

    Cons of Python:
    - More items to install separately
    - Not as well accepted in the astronomical community (but support clearly growing)
    - Scientific libraries not as mature: documentation not as complete, not as unified; not as deep in astronomical libraries and utilities
    - Not all IDL numerical library functions have corresponding functionality in Python
    - Some numeric constructs not quite as consistent with the language (or slightly less convenient than IDL)
    - Array indexing convention "backwards"
    - Small array performance slower
    - No standard GUI run/debug tool
    - Support for many widget systems (angst regarding which to choose)
    - Current lack of a function equivalent to SAVE/RESTORE in IDL
    - matplotlib does not yet have equivalents for all IDL 2-D plotting capability (e.g., surface plots)
    - Use of keyword arguments as flags less convenient
    - Plotting: comparatively immature, still much development going on; missing some plot types (e.g., surface); 3-D capability requires VTK (though matplotlib has some basic 3-D capability)
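    Since the .sav lock-in is the crux of the complaint above, it is worth noting that the data itself need not stay locked in: SciPy can read IDL save files directly, so a colleague's .sav output can be pulled into NumPy without an IDL license. A minimal sketch (the file name is hypothetical):

    ```python
    # Read an IDL .sav file from Python and inspect what it contains.
    import numpy as np
    from scipy.io import readsav

    data = readsav("colleague_results.sav")  # hypothetical file sent by a colleague
    for name, value in data.items():
        if isinstance(value, np.ndarray):
            print(name, value.shape, value.dtype)  # arrays come back as NumPy arrays
        else:
            print(name, type(value).__name__)
    ```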

    Read the article

  • TSQL Shred XML - Working with namespaces (newbie @ shredding XML)

    - by drachenstern
    Here's a link to my previous question on this same block of code with a working shred example Ok, I'm a C# ASP.NET dev following orders: The orders are to take a given dataset, shred the XML and return columns. I've argued that it's easier to do the shredding on the ASP.NET side where we already have access to things like deserializers, etc, and the entire complex of known types, but no, the boss says "shred it on the server, return a dataset, bind the dataset to the columns of the gridview" so for now, I'm doing what I was told. This is all to head off the folks who will come along and say "bad requirements". Task at hand: Current code that doesn't work: And if we modify the previous post to include namespaces on the XML elements, we lose the functionality that the previous post has... DECLARE @table1 AS TABLE ( ProductID VARCHAR(10) , Name VARCHAR(20) , Color VARCHAR(20) , UserEntered VARCHAR(20) , XmlField XML ) INSERT INTO @table1 SELECT '12345','ball','red','john','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>10</price></size><size xmlns="http://example.com/ns" name="large"><price>20</price></size></sizes>' INSERT INTO @table1 SELECT '12346','ball','blue','adam','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>12</price></size><size xmlns="http://example.com/ns" name="large"><price>25</price></size></sizes>' INSERT INTO @table1 SELECT '12347','ring','red','john','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>5</price></size><size xmlns="http://example.com/ns" name="large"><price>8</price></size></sizes>' INSERT INTO @table1 SELECT '12348','ring','blue','adam','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>8</price></size><size xmlns="http://example.com/ns" name="large"><price>10</price></size></sizes>' INSERT INTO @table1 SELECT '23456','auto','black','ann','<auto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><type xmlns="http://example.com/ns">car</type><wheels xmlns="http://example.com/ns">4</wheels><doors xmlns="http://example.com/ns">4</doors><cylinders xmlns="http://example.com/ns">3</cylinders></auto>' INSERT INTO @table1 SELECT '23457','auto','black','ann','<auto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><type xmlns="http://example.com/ns">truck</type><wheels xmlns="http://example.com/ns">4</wheels><doors xmlns="http://example.com/ns">2</doors><cylinders xmlns="http://example.com/ns">8</cylinders></auto><auto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><type xmlns="http://example.com/ns">car</type><wheels xmlns="http://example.com/ns">4</wheels><doors xmlns="http://example.com/ns">4</doors><cylinders xmlns="http://example.com/ns">6</cylinders></auto>' DECLARE @x XML -- I think I'm supposed to use WITH XMLNAMESPACES(...) 
here but I don't know how SELECT @x = ( SELECT ProductID , Name , Color , UserEntered , XmlField.query(' for $vehicle in //auto return <auto type = "{$vehicle/type}" wheels = "{$vehicle/wheels}" doors = "{$vehicle/doors}" cylinders = "{$vehicle/cylinders}" />') FROM @table1 table1 WHERE Name = 'auto' FOR XML AUTO ) SELECT @x SELECT ProductID = T.Item.value('../@ProductID', 'varchar(10)') , Name = T.Item.value('../@Name', 'varchar(20)') , Color = T.Item.value('../@Color', 'varchar(20)') , UserEntered = T.Item.value('../@UserEntered', 'varchar(20)') , VType = T.Item.value('@type' , 'varchar(10)') , Wheels = T.Item.value('@wheels', 'varchar(2)') , Doors = T.Item.value('@doors', 'varchar(2)') , Cylinders = T.Item.value('@cylinders', 'varchar(2)') FROM @x.nodes('//table1/auto') AS T(Item) If my previous post shows there's a much better way to do this, then I really need to revise this question as well, but on the off chance this coding-style is good, I can probably go ahead with this as-is... Any takers?
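    Outside of T-SQL, the namespace issue is easy to see in miniature: once xmlns="http://example.com/ns" appears on the size elements, they (and their price children) live in that namespace, so any query that still asks for the unqualified name matches nothing. Here is a small Python illustration of the same effect (not the T-SQL answer itself; on the SQL Server side the analogous mechanism is the WITH XMLNAMESPACES clause the question already mentions):

    ```python
    # Demonstrates why unqualified element names stop matching once a default
    # namespace is declared: the element's real name becomes '{uri}size'.
    import xml.etree.ElementTree as ET

    doc = ('<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">'
           '<size xmlns="http://example.com/ns" name="medium"><price>10</price></size>'
           '<size xmlns="http://example.com/ns" name="large"><price>20</price></size>'
           '</sizes>')
    root = ET.fromstring(doc)

    print(root.findall("size"))              # [] -- the unqualified name finds nothing
    ns = {"p": "http://example.com/ns"}
    for size in root.findall("p:size", ns):  # namespace-qualified name matches
        print(size.get("name"), size.find("p:price", ns).text)
    ```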

    Read the article

  • TSQL Shred XML - Is this right or is there a better way (newbie @ shredding XML)

    - by drachenstern
    Ok, I'm a C# ASP.NET dev following orders: The orders are to take a given dataset, shred the XML and return columns. I've argued that it's easier to do the shredding on the ASP.NET side where we already have access to things like deserializers, etc, and the entire complex of known types, but no, the boss says "shred it on the server, return a dataset, bind the dataset to the columns of the gridview" so for now, I'm doing what I was told. This is all to head off the folks who will come along and say "bad requirements". Task at hand: Here's my code that works and does what I want it to: DECLARE @table1 AS TABLE ( ProductID VARCHAR(10) , Name VARCHAR(20) , Color VARCHAR(20) , UserEntered VARCHAR(20) , XmlField XML ) INSERT INTO @table1 SELECT '12345','ball','red','john','<sizes><size name="medium"><price>10</price></size><size name="large"><price>20</price></size></sizes>' INSERT INTO @table1 SELECT '12346','ball','blue','adam','<sizes><size name="medium"><price>12</price></size><size name="large"><price>25</price></size></sizes>' INSERT INTO @table1 SELECT '12347','ring','red','john','<sizes><size name="medium"><price>5</price></size><size name="large"><price>8</price></size></sizes>' INSERT INTO @table1 SELECT '12348','ring','blue','adam','<sizes><size name="medium"><price>8</price></size><size name="large"><price>10</price></size></sizes>' INSERT INTO @table1 SELECT '23456','auto','black','ann','<auto><type>car</type><wheels>4</wheels><doors>4</doors><cylinders>3</cylinders></auto>' INSERT INTO @table1 SELECT '23457','auto','black','ann','<auto><type>truck</type><wheels>4</wheels><doors>2</doors><cylinders>8</cylinders></auto><auto><type>car</type><wheels>4</wheels><doors>4</doors><cylinders>6</cylinders></auto>' DECLARE @x XML SELECT @x = ( SELECT ProductID , Name , Color , UserEntered , XmlField.query(' for $vehicle in //auto return <auto type = "{$vehicle/type}" wheels = "{$vehicle/wheels}" doors = "{$vehicle/doors}" cylinders = "{$vehicle/cylinders}" />') FROM @table1 table1 WHERE Name = 'auto' FOR XML AUTO ) SELECT @x SELECT ProductID = T.Item.value('../@ProductID', 'varchar(10)') , Name = T.Item.value('../@Name', 'varchar(20)') , Color = T.Item.value('../@Color', 'varchar(20)') , UserEntered = T.Item.value('../@UserEntered', 'varchar(20)') , VType = T.Item.value('@type' , 'varchar(10)') , Wheels = T.Item.value('@wheels', 'varchar(2)') , Doors = T.Item.value('@doors', 'varchar(2)') , Cylinders = T.Item.value('@cylinders', 'varchar(2)') FROM @x.nodes('//table1/auto') AS T(Item) SELECT @x = ( SELECT ProductID , Name , Color , UserEntered , XmlField.query(' for $object in //sizes/size return <size name = "{$object/@name}" price = "{$object/price}" />') FROM @table1 table1 WHERE Name IN ('ring', 'ball') FOR XML AUTO ) SELECT @x SELECT ProductID = T.Item.value('../@ProductID', 'varchar(10)') , Name = T.Item.value('../@Name', 'varchar(20)') , Color = T.Item.value('../@Color', 'varchar(20)') , UserEntered = T.Item.value('../@UserEntered', 'varchar(20)') , SubName = T.Item.value('@name' , 'varchar(10)') , Price = T.Item.value('@price', 'varchar(2)') FROM @x.nodes('//table1/size') AS T(Item) So for now, I'm trying to figure out if there's a better way to write the code than what I'm doing now... (I have a part 2 I'm about to go key in)

    Read the article

  • Resumable upload from Java client to Grails web application?

    - by dersteps
    After almost 2 workdays of Googling and trying several different possibilities I found throughout the web, I'm asking this question here, hoping that I might finally get an answer. First of all, here's what I want to do: I'm developing a client and a server application with the purpose of exchanging a lot of large files between multiple clients on a single server. The client is developed in pure Java (JDK 1.6), while the web application is done in Grails (2.0.0). As the purpose of the client is to allow users to exchange a lot of large files (usually about 2GB each), I have to implement it in a way, so that the uploads are resumable, i.e. the users are able to stop and resume uploads at any time. Here's what I did so far: I actually managed to do what I wanted to do and stream large files to the server while still being able to pause and resume uploads using raw sockets. I would send a regular request to the server (using Apache's HttpClient library) to get the server to send me a port that was free for me to use, then open a ServerSocket on the server and connect to that particular socket from the client. Here's the problem with that: Actually, there are at least two problems with that: I open those ports myself, so I have to manage open and used ports myself. This is quite error-prone. I actually circumvent Grails' ability to manage a huge amount of (concurrent) connections. Finally, here's what I'm supposed to do now and the problem: As the problems I mentioned above are unacceptable, I am now supposed to use Java's URLConnection/HttpURLConnection classes, while still sticking to Grails. Connecting to the server and sending simple requests is no problem at all, everything worked fine. The problems started when I tried to use the streams (the connection's OutputStream in the client and the request's InputStream in the server). Opening the client's OutputStream and writing data to it is as easy as it gets. But reading from the request's InputStream seems impossible to me, as that stream is always empty, as it seems. Example Code Here's an example of the server side (Groovy controller): def test() { InputStream inStream = request.inputStream if(inStream != null) { int read = 0; byte[] buffer = new byte[4096]; long total = 0; println "Start reading" while((read = inStream.read(buffer)) != -1) { println "Read " + read + " bytes from input stream buffer" //<-- this is NEVER called } println "Reading finished" println "Read a total of " + total + " bytes" // <-- 'total' will always be 0 (zero) } else { println "Input Stream is null" // <-- This is NEVER called } } This is what I did on the client side (Java class): public void connect() { final URL url = new URL("myserveraddress"); final byte[] message = "someMessage".getBytes(); // Any byte[] - will be a file one day HttpURLConnection connection = url.openConnection(); connection.setRequestMethod("GET"); // other methods - same result // Write message DataOutputStream out = new DataOutputStream(connection.getOutputStream()); out.writeBytes(message); out.flush(); out.close(); // Actually connect connection.connect(); // is this placed correctly? 
// Get response BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream())); String line = null; while((line = in.readLine()) != null) { System.out.println(line); // Prints the whole server response as expected } in.close(); } As I mentioned, the problem is that request.inputStream always yields an empty InputStream, so I am never able to read anything from it (of course). But as that is exactly what I'm trying to do (so I can stream the file to be uploaded to the server, read from the InputStream and save it to a file), this is rather disappointing. I tried different HTTP methods, different data payloads, and also rearranged the code over and over again, but did not seem to be able to solve the problem. What I hope to find I hope to find a solution to my problem, of course. Anything is highly appreciated: hints, code snippets, library suggestions and so on. Maybe I'm even having it all wrong and need to go in a totally different direction. So, how can I implement resumable file uploads for rather large (binary) files from a Java client to a Grails web application without manually opening ports on the server side?
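    For the protocol-level question at the end (resumable uploads of large files), the usual pattern is to send the file in byte-range chunks and let the server report how much it already has, so the client can pick up where it left off. Below is a minimal client-side sketch of that idea in Python (the upload URL and the Upload-Offset header are hypothetical; the Grails side would need to read the request body and append it at the reported offset):

    ```python
    # Resumable upload sketch: query the current offset, then PUT the remaining
    # file in fixed-size chunks with a Content-Range header on each request.
    import os
    import requests

    CHUNK_SIZE = 1024 * 1024  # 1 MiB per request

    def upload_resumable(path, url):
        total = os.path.getsize(path)
        # Ask the (hypothetical) server how many bytes it already holds.
        offset = int(requests.head(url).headers.get("Upload-Offset", 0))
        with open(path, "rb") as f:
            f.seek(offset)
            while offset < total:
                chunk = f.read(CHUNK_SIZE)
                headers = {"Content-Range": f"bytes {offset}-{offset + len(chunk) - 1}/{total}"}
                requests.put(url, data=chunk, headers=headers).raise_for_status()
                offset += len(chunk)

    # upload_resumable("bigfile.bin", "https://example.com/upload/42")
    ```

    Interrupting and re-running the client simply repeats the offset query and resumes from wherever the server left off, which is the behaviour the question asks for.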

    Read the article

  • Using JavaScript, how do I write the same text to multiple HTML elements, or how do I write text to all HTML elements of the same class?

    - by myfavoritenoisemaker
    I am writing this program to take a root music note and populate tables with various scales from that root note. So, many of the table cells will have the exact same value in them. I realize I can call my "useScale" function for every single element that I need to write text to, but since there will be repeats, it seemed like there should be a way to run my function once and apply the results to multiple elements. However, it did not work to use document.getElementsByClassName("").innerHTML. I had been using "ById", which worked fine, but each ID must be unique, so I can't write to multiple elements. Here's my code; I'd love some suggestions. Many thanks. Root Note <input type="text" name="defineRootNote" id="rootNoteCapture" size="2"/> <button onclick="findScale()">Submit</button> <table id="majorTriad"> <th>Major Triad</th> <tr><td>1st</td><td class="root"> </td></tr> <tr><td>3rd</td><td class="3rd"> </td></tr> <tr><td>5th</td><td class="5th"> </td></tr> </table> <table id="minorTriad"> <th>Minor Triad</th> <tr><td>1st</td><td class="root"> </td></tr> <tr><td>3 Flat</td><td class="3Flat"> </td></tr> <tr><td>5th</td><td class="5th"> </td></tr> </table> <script type="text/javascript"> function findScale(rootNote){ var rootNote = document.getElementById("rootNoteCapture").value; rootNote = rootNote.toUpperCase(); var scaleCheck = ["A", "A#", "AB", "B", "BB", "C", "C#", "D", "D#", "DB", "E", "EB", "F", "F#", "G", "G#", "GB"]; if (scaleCheck.indexOf(rootNote) == -1) { document.getElementById("root").innerHTML = "Invalid Entry"; } else { switch(rootNote){ case "AB": rootNote = "G#"; break; case "BB": rootNote = "A#"; break; case "DB": rootNote = "C#"; break; case "EB": rootNote = "D#"; break; case "GB": rootNote = "F#"; break; rootNote = rootNote; } document.getElementsByClassName("root").innerHTML = rootNote; document.getElementsByClassName("3rd").innerHTML = useScale(rootNote, 4); document.getElementsByClassName("5th").innerHTML = useScale(rootNote, 7); document.getElementsByClassName("3Flat").innerHTML = useScale(rootNote, 3); } } function useScale(startPoint, offset){ var scale = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]; var returnNote = null; var scalePoint = scale.indexOf(startPoint); for (var i = 0; i < offset; ){ i = i + 1; //console.log(i); //console.log(scalePoint); scalePoint ++; if (scalePoint > 11) {scalePoint = 0;} } returnNote = scale[scalePoint]; return returnNote; } </script>

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012 News Facts During his opening keynote address at Oracle OpenWorld, Oracle CEO, Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Next-Generation Technologies Deliver Dramatic Performance Improvements Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database Workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including: Four times the Flash memory capacity of the previous generation; with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash; 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold; 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors; Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data; Up to 30 percent reduction in power and cooling. Configured for Your Business, Available Today Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations and existing systems can also be upgraded with Oracle Exadata X3-2 servers. 
Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications. Supporting Quotes “Forward-looking enterprises are moving towards Cloud Computing architectures,” said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. “Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today.” Supporting Resources Oracle Press Release Oracle Exadata Database Machine Oracle Exadata X3-2 Database In-Memory Machine Oracle Exadata X3-8 Database In-Memory Machine Oracle Database 11g Follow Oracle Database via Blog, Facebook and Twitter Oracle OpenWorld 2012 Oracle OpenWorld 2012 Keynotes Like Oracle OpenWorld on Facebook Follow Oracle OpenWorld on Twitter Oracle OpenWorld Blog Oracle OpenWorld on LinkedIn Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • Windows Azure Use Case: Web Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Many applications have a requirement to be located outside of the organization’s internal infrastructure control. For instance, the company website for a brick-and-mortar retail company may want to post not only static but interactive content to be available to their external customers, and not want the customers to have access inside the organization’s firewall. There are also cases of pure web applications used for a great many of the internal functions of the business. This allows for remote workers, shared customer/employee workloads and data and other advantages. Some firms choose to host these web servers internally, others choose to contract out the infrastructure to an “ASP” (Application Service Provider) or an Infrastructure as a Service (IaaS) company. In any case, the design of these applications often resembles the following: In this design, a server (or perhaps more than one) hosts the presentation function (http or https) access to the application, and this same system may hold the computational aspects of the program. Authorization and Access is controlled programmatically, or is more open if this is a customer-facing application. Storage is either placed on the same or other servers, hosted within an RDBMS or NoSQL database, or a combination of the options, all coded into the application. High-Availability within this scenario is often the responsibility of the architects of the application, and by purchasing more hosting resources which must be built, licensed and configured, and manually added as demand requires, although some IaaS providers have a partially automatic method to add nodes for scale-out, if the architecture of the application supports it. Disaster Recovery is the responsibility of the system architect as well. Implementation: In a Windows Azure Platform as a Service (PaaS) environment, many of these architectural considerations are designed into the system. The Azure “Fabric” (not to be confused with the Azure implementation of Application Fabric - more on that in a moment) is designed to provide scalability. Compute resources can be added and removed programmatically based on any number of factors. Balancers at the request-level of the Fabric automatically route http and https requests. The fabric also provides High-Availability for storage and other components. Disaster recovery is a shared responsibility between the facilities (which have the ability to restore in case of catastrophic failure) and your code, which should build in recovery. In a Windows Azure-based web application, you have the ability to separate out the various functions and components. Presentation can be coded for multiple platforms like smart phones, tablets and PC’s, while the computation can be a single entity shared between them. This makes the applications more resilient and more object-oriented, and lends itself to a SOA or Distributed Computing architecture. It is true that you could code up a similar set of functionality in a traditional web-farm, but the difference here is that the components are built into the very design of the architecture. The API’s and DLL’s you call in a Windows Azure code base contains components as first-class citizens. 
For instance, if you need storage, it is simply called within the application as an object.  Computation has multiple options and the ability to scale linearly. You also gain another component that you would either have to write or bolt-in to a typical web-farm: the Application Fabric. This Windows Azure component provides communication between applications or even to on-premise systems. It provides authorization in either person-based or claims-based perspectives. SQL Azure provides relational storage as another option, and can also be used or accessed from on-premise systems. It should be noted that you can use all or some of these components individually. Resources: Design Strategies for Scalable Active Server Applications - http://msdn.microsoft.com/en-us/library/ms972349.aspx  Physical Tiers and Deployment  - http://msdn.microsoft.com/en-us/library/ee658120.aspx

    Read the article

  • World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

    - by Brian
    Oracle produced a World Record batch throughput for single system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2. The workload includes both online and batch workload. The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne Long and Short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute). In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were executed each in separate Oracle Solaris Containers which enabled optimal system resources distribution and performance together with scalable and manageable virtualization. One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2 utilized only 55% of the available CPU power. The Oracle DB server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance. This configuration with SPARC T4-2 server has achieved 33% more Users/core, 47% more UBEs/min and 78% more Users/rack unit than the IBM Power 770 server. The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server with twice as many processors supported only 12,000 concurrent online users while concurrently executing mixed batch workloads at only 65 UBEs per minute. This benchmark demonstrates more than 2x cost savings by consolidating the complete solution in a single SPARC T4-2 server compared to earlier published results of 10,000 users and 67 UBEs per minute on two SPARC T4-2 and SPARC T4-1. The Oracle DB server used mirrored (RAID 1) volumes for the database providing high availability for the data without impacting performance. Performance Landscape JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark Consolidated Online with Batch Workload System Rack Units BatchRate(UBEs/m) Online Users Users /Units Users /Core Version SPARC T4-2 (2 x SPARC T4, 2.85 GHz) 3 95.5 8,000 2,667 500 9.0.2 IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores) 8 65 12,000 1,500 375 9.0.2 Batch Rate (UBEs/m) — Batch transaction rate in UBEs per minute Configuration Summary Hardware Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz 256 GB memory 4 x 300 GB 10K RPM SAS internal disk 2 x 300 GB internal SSD 2 x Sun Storage F5100 Flash Arrays Software Configuration: Oracle Solaris 10 Oracle Solaris Containers JD Edwards EnterpriseOne 9.0.2 JD Edwards EnterpriseOne Tools (8.98.4.2) Oracle WebLogic Server 11g (10.3.4) Oracle HTTP Server 11g Oracle Database 11g Release 2 (11.2.0.1) Benchmark Description JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. 
Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE – Universal Business Engine workload of 61 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the user’s transactions response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE processes workload runs from the JD Enterprise Application server. Oracle's UBE processes come as three flavors: Short UBEs < 1 minute engage in Business Report and Summary Analysis, Mid UBEs > 1 minute create a large report of Account, Balance, and Full Address, Long UBEs > 2 minutes simulate Payroll, Sales Order, night only jobs. The UBE workload generates large numbers of PDF files reports and log files. The UBE Queues are categorized as the QBATCHD, a single threaded queue for large and medium UBEs, and the QPROCESS queue for short UBEs run concurrently. Oracle's UBE process performance metric is Number of Maximum Concurrent UBE processes at transaction rate, UBEs/minute. Key Points and Best Practices Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Servers 11g Release 1 coupled with two Oracle Web Tier HTTP server instances and one Oracle Database 11g Release 2 database on a single SPARC T4-2 server were hosted in separate Oracle Solaris Containers bound to four processor sets to demonstrate consolidation of multiple applications, web servers and the database with best resource utilizations. Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server. A Oracle WebLogic vertical cluster was configured on each WebServer Container with twelve managed instances each to load balance users' requests and to provide the infrastructure that enables scaling to high number of users with ease of deployment and high availability. The database log writer was run in the real time RT class and bound to a processor set. The database redo logs were configured on the raw disk partitions. The Oracle Solaris Container running the Enterprise Application server completed 61 Short UBEs, 4 Long UBEs concurrently as the mixed size batch workload. The mixed size UBEs ran concurrently from the Enterprise Application server with the 8,000 online users driven by the LoadRunner. See Also SPARC T4-2 Server oracle.com OTN JD Edwards EnterpriseOne oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Oracle Fusion Middleware oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.

    Read the article

  • Is there a low carbon future for the retail industry?

    - by user801960
    Recently Oracle published a report in conjunction with The Future Laboratory and a global panel of experts to highlight the issue of energy use in modern industry and the serious need to reduce carbon emissions radically by 2050.  Emissions must be cut by 80-95% below the levels in 1990 – but what can the retail industry do to keep up with this? There are three key aspects to the retail industry where carbon emissions can be cut:  manufacturing, transport and IT.  Manufacturing Naturally, manufacturing is going to be a big area where businesses across all industries will be forced to make considerable savings in carbon emissions as well as other forms of pollution.  Many retailers of all sizes will use third party factories and will have little control over specific environmental impacts from the factory, but retailers can reduce environmental impact at the factories by managing orders more efficiently – better planning for stock requirements means economies of scale both in terms of finance and the environment. The John Lewis Partnership has made detailed commitments to reducing manufacturing and packaging waste on both its own-brand products and products it sources from third party suppliers. It aims to divert 95 percent of its operational waste from landfill by 2013, which is a huge logistics challenge.  The John Lewis Partnership’s website provides a large amount of information on its responsibilities towards the environment. Transport Similarly to manufacturing, tightening up on logistical planning for stock distribution will make savings on carbon emissions from haulage.  More accurate supply and demand analysis will mean less stock re-allocation after initial distribution, and better warehouse management will mean more efficient stock distribution.  UK grocery retailer Morrisons has introduced double-decked trailers to its haulage fleet and adjusted distribution logistics accordingly to reduce the number of kilometers travelled by the fleet.  Morrisons measures route planning efficiency in terms of cases moved per kilometre and has, over the last two years, increased the number of cases per kilometre by 12.7%.  See Morrisons Corporate Responsibility report for more information. IT IT infrastructure is often initially overlooked by businesses when considering environmental efficiency.  Datacentres and web servers often need to run 24/7 to handle both consumer orders and internal logistics, and this both requires a lot of energy and puts out a lot of heat.  Many businesses are lowering environmental impact by reducing IT system fragmentation in their offices, while an increasing number of businesses are outsourcing their datacenters to cloud-based services.  Using centralised datacenters reduces the power usage at smaller offices, while using cloud based services means the datacenters can be based in a more environmentally friendly location.  For example, Facebook is opening a massive datacentre in Sweden – close to the Arctic Circle – to reduce the need for artificial cooling methods.  In addition, moving to a cloud-based solution makes IT services more easily scaleable, reducing redundant IT systems that would still use energy.  In store, the UK’s Carbon Trust reports that on average, lighting accounts for 25% of a retailer’s electricity costs, and for grocery retailers, up to 50% of their electricity bill comes from refrigeration units.  On a smaller scale, retailers can invest in greener technologies in store and in their offices.  
    The report concludes that widely shared objectives of energy security, reduced emissions and continued economic growth are dependent on the development of a smart grid capable of delivering energy efficiency and demand response, as well as integrating renewable and variable sources of energy. The report is available to download from http://emeapressoffice.oracle.com/imagelibrary/detail.aspx?MediaDetailsID=1766. I'd be interested to hear your thoughts on the report.

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool. 
RESTORE DATABASE WidgetProduction_Virtual
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
     MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
     NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY
GO
Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore.   The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
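If the nightly release test is driven from a script rather than invoked directly by the CI tool, the same virtual restore can be wrapped in a few lines of Python. This is only a rough sketch under assumptions the post does not state: the pyodbc module, the ODBC driver name, a local SQL Server instance with Windows authentication and the sanity-check query are all illustrative choices, and SQL Virtual Restore still has to be installed on the server for the .vmdf/.vldf naming to trigger a virtual rather than a conventional restore.

import pyodbc

# Assumed connection details: local default instance, Windows authentication.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes"
)

VIRTUAL_RESTORE = r"""
RESTORE DATABASE WidgetProduction_Virtual
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
     MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
     NORECOVERY, STATS=1, REPLACE
"""

def nightly_release_test():
    # RESTORE cannot run inside a user transaction, so the connection must autocommit.
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    try:
        cur = conn.cursor()
        cur.execute(VIRTUAL_RESTORE)
        while cur.nextset():  # step past any informational result sets
            pass
        cur.execute("RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY")
        while cur.nextset():
            pass
        # Minimal sanity check that the virtual database is attached and readable.
        cur.execute("SELECT COUNT(*) FROM WidgetProduction_Virtual.sys.tables")
        print("Tables visible in the virtual database:", cur.fetchone()[0])
    finally:
        conn.close()

if __name__ == "__main__":
    nightly_release_test()

The upgrade scripts and real validation queries would slot in after the restore; the CI tool then only needs to run the script and fail the build if it raises.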

    Read the article

  • Virtualbox: host only networking - proxy internet connection

    - by Russell
    I'll ask my question first, then give details about where I am coming from: Is it possible to use host-only networking, and then have Ubuntu act as a proxy to provide internet access to Windows? If so, how? I am trying to get the right combination of networking for my VirtualBox Windows guest VM (Win7). My host is Ubuntu 10.10 (Maverick). I believe I understand the basic network options (please correct me if I am incorrect): NAT - the host can't initiate connections to the guest, but the guest gets internet access through the host; Host-only - a separate private adapter shared with the host, but the guest has no internet access; Bridged - the guest's virtual adapter is bridged onto one of the host's physical adapters, so the guest appears on the same network as the host. I am trying to give my Windows guest internet access, but also access the host on a separate network. Bridged only works when the host is connected to the internet (this is a laptop), so when it's not connected the network is down. Thanks, I appreciate your help.
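One common way to get both of the goals stated above (guest internet access plus a private host-guest network) without setting up a proxy on the host is to give the VM two adapters: NAT for outbound internet and host-only for talking to the host. The sketch below is only an illustration, not a confirmed answer for this setup; the VM name "Win7" and the host-only interface name "vboxnet0" are assumptions, and the VM must be powered off before the settings apply.

import subprocess

VM_NAME = "Win7"          # assumed name of the Windows 7 guest
HOSTONLY_IF = "vboxnet0"  # default host-only interface name on a Linux host

def vboxmanage(*args):
    """Run a VBoxManage command and fail loudly if it errors."""
    cmd = ["VBoxManage", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Adapter 1: NAT gives the guest outbound internet access through the host.
vboxmanage("modifyvm", VM_NAME, "--nic1", "nat")

# Adapter 2: host-only gives the host and guest a private network of their own.
vboxmanage("modifyvm", VM_NAME, "--nic2", "hostonly", "--hostonlyadapter2", HOSTONLY_IF)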

    Read the article

  • Tuesday at OpenWorld: Identity Management

    - by Tanu Sood
    At Oracle OpenWorld? From keynotes and general sessions to product deep dives and executive events, this Tuesday is full of informational, educational and networking opportunities for you. Here's a quick run-down of what's happening today: Tuesday, October 2, 2012 KEYNOTE: The Oracle Cloud: Oracle's Cloud Platform and Applications Strategy 8:00 a.m. – 9:45 a.m., Moscone North, Hall D Leading customers will join Oracle Executive Vice President Thomas Kurian to discuss how Oracle's innovative cloud solutions are transforming how they manage their business, excite and retain their employees, and deliver great customer experiences through Oracle Cloud. GENERAL SESSION: Oracle Fusion Middleware Strategies Driving Business Innovation 10:15 a.m. – 11:15 a.m., Moscone North - Hall D Join Hasan Rizvi, Executive Vice President of Product, in this strategy and roadmap session to hear how developers leverage new innovations in their applications and customers achieve their business innovation goals with Oracle Fusion Middleware. CON9437: Mobile Access Management 10:15 a.m. – 11:15 a.m., Moscone West 3022 The session will feature Identity Management evangelists from companies like Intuit, NetApp and Toyota to discuss how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access. CON9162: Oracle Fusion Middleware: Meet This Year's Most Impressive Customer Projects 11:45 a.m. – 12:45 p.m., Moscone West, 3001 Hear from the winners of the 2012 Oracle Fusion Middleware Innovation Awards and see which customers are taking home a trophy for the 2012 Oracle Fusion Middleware Innovation Award.  Read more about the Innovation Awards here. CON9491: Enhancing the End-User Experience with Oracle Identity Governance Applications 11:45 a.m. – 12:45 p.m., Moscone West 3008 Join experts from Visa and Oracle as they explore how Oracle Identity Governance solutions deliver complete identity administration and governance solutions with support for emerging requirements like cloud identities and mobile devices. CON9447: Enabling Access for Hundreds of Millions of Users 1:15 p.m. – 2:15 p.m., Moscone West 3008 Dealing with scale problems? Looking to address identity management requirements with a million or so users in mind? Then take note of Cisco's implementation. Join this session to hear first-hand how Cisco tackled identity management and scaled their implementation to bolster security and enforce compliance. CON9465: Next Generation Directory – Oracle Unified Directory 5:00 p.m. – 6:00 p.m., Moscone West 3008 Get the 360-degree perspective from a solution provider, implementation services partner and the customer in this session to learn how the latest Oracle Unified Directory solutions can help you build a directory infrastructure that is optimized to support cloud, mobile and social networking and yet deliver on scale and performance. EVENTS: Executive Edge @ OpenWorld: Chief Security Officer (CSO) Summit 10:00 a.m. – 3:00 p.m. If you are attending the Executive Edge @ OpenWorld, be sure to check out the sessions at the Chief Security Officer Summit. Former Sr. Counsel for the National Security Agency, Joel Brenner, will be speaking about his new book "America the Vulnerable". In addition, PwC will present a panel discussion on "Crisis Management to Business Advantage: Security Leadership". See below for the complete agenda. PRODUCT DEMOS: And don't forget to see Oracle Identity Management solutions in action at the Oracle OpenWorld DEMOgrounds. 
DEMOS AND LOCATIONS:
- Access Management: Complete and Scalable Access Management (Moscone South, Right - S-218)
- Access Management: Federating and Leveraging Social Identities (Moscone South, Right - S-220)
- Access Management: Mobile Access Management (Moscone South, Right - S-219)
- Access Management: Real-Time Authorizations (Moscone South, Right - S-217)
- Access Management: Secure SOA and Web Services Security (Moscone South, Right - S-223)
- Identity Governance: Modern Administration and Tooling (Moscone South, Right - S-210)
- Identity Management Monitoring with Oracle Enterprise Manager (Moscone South, Right - S-212)
- Oracle Directory Services Plus: Performant, Cloud-Ready (Moscone South, Right - S-222)
- Oracle Identity Management: Closed-Loop Access Certification (Moscone South, Right - S-221)
EXHIBITION HALL HOURS:
- Monday, October 1: 9:30 a.m.–6:00 p.m. (dedicated hours 9:30 a.m.–10:45 a.m.)
- Tuesday, October 2: 9:45 a.m.–6:00 p.m. (dedicated hours 2:15 p.m.–2:45 p.m.)
- Wednesday, October 3: 9:45 a.m.–4:00 p.m. (dedicated hours 2:15 p.m.–3:30 p.m.)
For a complete listing, keep the Focus on Identity Management document handy. And don't forget to converse with us while at OpenWorld @oracleidm. We look forward to hearing from you.

    Read the article

  • Autoscaling in a modern world... last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important.  Our code will work right the first time.  So, you can understand my surprise when the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale.  I mean, it worked on my machine.  How dare the datacenter argue with that.  So, how did I track down the problem?  (turns out, it was not so much code as lack of the right certificate)  When I ran it locally in the developer fabric, I was able to see a wealth of information.  Lots of periodic status info every time the autoscaler came around to check on my rules and decide to act or not.  But that information was not making it to Azure storage.  The diagnostics were not being transferred to where I could easily see and use them to track down why things were not being cooperative.  After a bit of digging, I discovered the problem.  You need to add a bit of extra configuration code to get the correct information stored for you.  I added the following to my app.config:
Code Snippet
<system.diagnostics>
  <sources>
    <source name="Autoscaling General" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
      <listeners>
        <add name="AzureDiag" />
        <remove name="Default" />
      </listeners>
    </source>
    <source name="Autoscaling Updates" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
      <listeners>
        <add name="AzureDiag" />
        <remove name="Default" />
      </listeners>
    </source>
  </sources>
  <switches>
    <add name="SourceSwitch" value="Verbose, Information, Warning, Error, Critical" />
  </switches>
  <sharedListeners>
    <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiag" />
  </sharedListeners>
  <trace>
    <listeners>
      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>
Suddenly all the rich tracing info I needed was filling up my storage account.  After a few cycles of attempting to scale, I identified the cert problem, uploaded a correct certificate, and away it went.  I hope this was helpful.

    Read the article

  • Oracle Products Reflect Key Trends Shaping Enterprise 2.0

    - by kellsey.ruppel(at)oracle.com
    Following up on his predictions for 2011, we asked Enterprise 2.0 veteran Andy MacMillan to map out the ways Oracle solutions are at the forefront of industry trends--and how Oracle customers can benefit in the coming year. 1. Increase organizational awareness | Oracle WebCenter Suite Oracle WebCenter Suite provides a unique set of capabilities to drive organizational awareness. In particular, the expansive activity graph connects users directly to key enterprise applications, activities, and interests. In this way, applicable and critical business information is automatically and immediately visible--in the context of key tasks--via real-time dashboards and comprehensive reporting. Oracle WebCenter Suite also integrates key E2.0 services, such as blogs, wikis, and RSS feeds, into critical business processes, including back-office systems of records such as ERP and CRM systems. 2. Drive online customer engagement | Oracle Real-Time Decisions With more and more business being conducted on the Web, driving increased online customer engagement becomes a critical key to success. This effort is usually spearheaded by an increasingly important executive role, the Head of Online, who usually reports directly to the CMO. To help manage the Web experience online, Oracle solutions are driving a new kind of intelligent social commerce by combining Oracle Universal Content Management, Oracle WebCenter Services, and Oracle Real-Time Decisions with leading e-commerce and product recommendations. Oracle Real-Time Decisions provides multichannel recommendations for content, products, and services--including seamless integration across Web, mobile, and social channels. The result: happier customers, increased customer acquisition and retention, and improved critical success metrics such as shopping cart abandonment. 3. Easily build composite applications | Oracle Application Development Framework Thanks to the shared user experience strategy across Oracle Fusion Middleware, Oracle Fusion Applications and many other Oracle Applications, customers can easily create real, customer-specific composite applications using Oracle WebCenter Suite and Oracle Application Development Framework. Oracle Application Development Framework components provide modular user interface components that can build rich, social composite applications. In addition, a broad set of components spanning BPM, SOA, ECM, and beyond can be quickly and easily incorporated into composite applications. 4. Integrate records management into a global content platform | Oracle Enterprise Content Management 11g Oracle Enterprise Content Management 11g provides leading records management capabilities as part of a unified ECM platform for managing records, documents, Web content, digital assets, enterprise imaging, and application imaging. This unique strategy provides comprehensive records management in a consistent, cost-effective way, and enables organizations to consolidate ECM repositories and connect ECM to critical business applications. 5. Achieve ECM at extreme scale | Oracle WebLogic Server and Oracle Exadata To support the high-performance demands of a unified and rationalized content platform, Oracle has pioneered highly scalable and high-performing ECM infrastructures. Two innovations in particular helped make this happen. The core ECM platform itself moved to an Enterprise Java architecture, so organizations can now use Oracle WebLogic Server for enhanced scalability and manageability. 
Oracle Enterprise Content Management 11g can leverage Oracle Exadata for extreme performance and scale. Likewise, Oracle Exalogic--Oracle's foundation for cloud computing--enables extreme performance for processor-intensive capabilities such as content conversion or dynamic Web page delivery. Learn more about Oracle's Enterprise 2.0 solutions.

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we're proud to have taken part in its beginnings and, as you'll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, "when will I actually use it?" As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few examples of use cases where LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there's one place that has bandwidth, it's your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive. 
Tape storage should be the preferred format because, as IDC points out, "Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (See note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you've downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn't have to care what technology the third party has. If it's written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a "standard," but beauty is in the eye of the beholder. It's a specification at this point, not a standard. That said, we're already seeing application vendors create functionality to write in an LTFS format based on the specification. And it's my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard first with the Storage Network Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it's free, which allows tape to maintain all of its cost advantages over disk! Note 1 - IDC Report. April, 2011. "IDC's Archival Storage Solutions Taxonomy, 2011" - Brian Zents
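To make the point concrete that a mounted LTFS cartridge behaves like any other filesystem, here is a minimal sketch (not from the article) that uses ordinary file operations against it; the mount point and file names are assumptions for illustration.

import os
import shutil

LTFS_MOUNT = "/mnt/ltfs"  # assumed mount point of the LTFS-formatted cartridge

# Listing the cartridge is quick because the index lives in the metadata partition.
for name in sorted(os.listdir(LTFS_MOUNT)):
    print(name)

# Copying a large seismic image onto tape is just an ordinary file copy.
shutil.copy2("/data/survey_block42.segy",
             os.path.join(LTFS_MOUNT, "survey_block42.segy"))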

    Read the article

< Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >