Search Results

Search found 22526 results on 902 pages for 'multiple databases'.

Page 331/902 | < Previous Page | 327 328 329 330 331 332 333 334 335 336 337 338  | Next Page >

  • How to Answer a Stupid Interview Question the Right Way

    - by AjarnMark
    Have you ever been asked a stupid question during an interview; one that seemed to have no relation to the job responsibilities at all?  Tech people are often caught off-guard by these apparently irrelevant questions, but there is a way you can turn these to your favor.  Here is one idea. While chatting with a couple of folks between sessions at SQLSaturday 43 last weekend, one of them expressed frustration over a seemingly ridiculous and trivial question that she was asked during an interview, and she believes it cost her the job opportunity.  The question, as I remember it being described was, “What is the largest byte measurement?”.  The candidate made up a guess (“zetabyte”) during the interview, which is actually closer than she may have realized.  According to Wikipedia, there is a measurement known as zettabyte which is 10^21, and the largest one listed there is yottabyte at 10^24. My first reaction to this question was, “That’s just a hiring manager that doesn’t really know what they’re looking for in a candidate.  Furthermore, this tells me that this manager really does not understand how to build a team.”  In most companies, team interaction is more important than uber-knowledge.  I didn’t ask, but this could also be another geek on the team trying to establish their Alpha-Geek stature.  I suppose that there are a few, very few, companies that can build their businesses on hiring only the extreme alpha-geeks, but that certainly does not represent the majority of businesses in America. My friend who was there suggested that the appropriate response to this silly question would be, “And how does this apply to the work I will be doing?” Of course this is an understandable response when you’re frustrated because you know you can handle the technical aspects of the job, and it seems like the interviewer is just being silly.  But it is also a direct challenge, which may not be the best approach in interviewing.  I do have to admit, though, that there are those folks who just won’t respect you until you do challenge them, but again, I don’t think that is the majority. So after some thought, here is my suggestion: “Well, I know that there are petabytes and exabytes and things even larger than that, but I haven’t been keeping up on my list of Greek prefixes that have not yet been used, so I would have to look up the exact answer if you need it.  However, I have worked with databases as large as 30 Terabytes.  How big are the largest databases here at X Corporation?”  Perhaps with a follow-up of, “Typically, what I have seen in companies that have databases of your size, is that the three biggest challenges they face are: A, B, and C.  What would you say are the top 3 concerns that you would like the person you hire to be able to address?…Here is how I have dealt with those concerns in the past (or ‘Here is how I would tackle those issues for you…’).” Wait! What just happened?!  We took a seemingly irrelevant and frustrating question and turned it around into an opportunity to highlight our relevant skills and guide the conversation back in a direction more to our liking and benefit.  In more generic terms, here is what we did: Admit that you don’t know the specific answer off the top of your head, but can get it if it’s truly important to the company. Maybe for some reason it really is important to them. Mention something similar or related that you do know, reassuring them that you do have some knowledge in that subject area. Draw a parallel to your past work experience. 
Ask follow-up questions about the company’s specific needs and discuss how you can fulfill those. This type of thing requires practice and some forethought.  I didn’t come up with this answer until a day later, which is too late when you’re interviewing.  I still think it is silly for an interviewer to ask something like that, but at least this is one way to spin it to your advantage while you consider whether you really want to work for someone who would ask a thing like that.  Remember, interviewing is a two-way process.  You’re deciding whether you want to work there just as much as they are deciding whether they want you. There is always the possibility that this was a calculated maneuver on the part of the hiring manager just to see how quickly you think on your feet and how you handle stupid questions.  Maybe he knows something about the work environment and he’s trying to gauge whether you’ll actually fit in okay.  And if that’s the case, then the above response still works quite well.

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.
1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted.
2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react.
So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
RESTORE DATABASE WidgetProduction_virtual
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH
  MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
  MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
  NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY

Note that the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
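To give a feel for the multi-developer scenario, here is a hypothetical sketch (not from the original post) of two virtual databases mounted off the same backup file; only the database names and the .vmdf/.vldf file names change between the two restores, and the paths are made up for illustration:

-- Hypothetical sketch: two developers sharing one production backup.
-- The .bak file is read by both virtual databases and is never duplicated;
-- each developer's changes land in their own small .vmdf/.vldf delta files.
RESTORE DATABASE WidgetProduction_Dev1
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH
  MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev1.vmdf',
  MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev1.vldf',
  NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_Dev1 WITH RECOVERY
GO

RESTORE DATABASE WidgetProduction_Dev2
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH
  MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2.vmdf',
  MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2.vldf',
  NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_Dev2 WITH RECOVERY
GO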

    Read the article

  • Problem with stackless python, cannot write to a dict

    - by ANON
    I have a simple map-reduce type algorithm which I want to implement in Python and make use of multiple cores. I read somewhere that threads using the native thread module in 2.6 don't make use of multiple cores. Is that true? I even implemented it using Stackless Python, however I am getting weird errors. [Update: a quick search showed that Stackless does not allow multiple cores, so are there any other alternatives?]

def Propagate(start, end):
    print "running Thread with range: ", start, end

    def maxVote(nLabels):
        count = {}
        maxList = []
        maxCount = 0
        for nLabel in nLabels:
            if nLabel in count:
                count[nLabel] += 1
            else:
                count[nLabel] = 1
            # Check if the count is max
            if count[nLabel] > maxCount:
                maxCount = count[nLabel]
                maxList = [nLabel,]
            elif count[nLabel] == maxCount:
                maxList.append(nLabel)
        return random.choice(maxList)

    for num in range(start, end):
        node = MapList[num]
        nLabels = [Label[k] for k in Adj[node]]
        if nLabels != []:
            Label[node] = maxVote(nLabels)
        else:
            Label[node] = node

However, in the above code the values assigned to Label (that is, the changes to the dictionary) are lost. The Propagate function above is used as the callable for microthreads (i.e. tasklets).

    Read the article

  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
    How did you come to be working at Red Gate? I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up with them over lunch in the Red Gate canteen. I was so impressed with what I found there, that, three days later, I'd applied for a role as a developer. And how did you get into software development? My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over to development in the 90s. What's the best thing about working as a developer at Red Gate? Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming through the product managers. The company's whole ethos is built around assisting the user, and this is in stark contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails. Tell us a bit about the products you are currently working on. You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kind of things you can do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI. What's a typical day for you? My days are pretty varied. We have our daily team stand-up meeting and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give half a day up to future planning with the team, when we look at the long- and short-term aims for the product and work out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly.
Have you noticed anything different about developing tools for DBAs rather than other kinds of IT user? It seems to me that DBAs are quite independent-minded; they know exactly what the problem they are facing is, and often have a solution in mind before they begin to look at what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met with DBAs, I've often been very impressed at their ability to summarise their set-up, the issues, the obstacles they face when implementing a tool and their plans for their environment. It's easier to develop products for this audience as they give such a detailed overview of their needs, and I feel I understand their problems.

    Read the article

  • Managing Team Development with SSAS, TFS, & BIDS

    - by Kevin D. White
    I am currently a single BI developer over a corporate data warehouse and cube. I use SQL Server 2008, SSAS, and SSIS as my basic toolkit. I use Visual Studio + BIDS and TFS for my IDE and source control. I am about to take on multiple projects with an offshore vendor and I am worried about managing change. My major concern is managing merges and changes between me and the offshore team. Merging and managing changes to SQL & XML for just one person is bad enough, but with multiple developers it seems like a nightmare. Any thoughts on how best to structure development, knowing that sometimes there is no way to avoid multiple individuals making changes to the same file?

    Read the article

  • SSIS (SQL Server Integration Services) XML data flow

    - by swapna
    Hi, I have an XML file whose content I have to write to a database table using an SSIS package. I am using an XML source and an OLE DB destination. My issue now is that this XML file generates multiple outputs (event, product, offer, form, etc.), but I need to write them all into one data row in the database (or more than one row if there are 2 products for the event). I do not know how to take these multiple outputs and make a single row for an event. I have read numerous articles about this subject but am not able to reach a decision on the right way of doing this. 1) An XML source? (If I use this, how do I merge the multiple outputs?) 2) Or a script task using XML objects to read and write to the DB? Or anything new? Please provide me some solutions. XML sample file: * - ABc. 2009-06-07 2010-04-30 region test 1 contact - offertest product1 product1 187 * Thanks SNA

    Read the article

  • Fast multi-window rendering with C#

    - by seb
    I've been searching and testing different kinds of rendering libraries for C# for many weeks now. So far I haven't found a single library that works well on multi-windowed rendering setups. The requirement is to be able to run the program on 12+ monitor setups (financial charting) without latencies on a fast computer. Each window needs to update multiple times every second. While doing this the CPU needs to do lots of intensive and time-critical tasks, so some of the burden has to be shifted to GPUs. That's where hardware rendering steps in, in other words DirectX or OpenGL. I have tried GDI+ with Windows Forms and figured it's way too slow for my needs. I have tried OpenGL via OpenTK (on a Windows Forms control) which seemed decently quick (I still have some tests to run on it) but painfully difficult to get working properly (hard to find/program good text rendering libraries). Recently I tried DirectX9, DirectX10 and Direct2D with Windows Forms via SharpDX. I tried both a separate device for each window and a single device/multiple swap chains approach. All of these resulted in very poor performance on multiple windows. For example, if I set the target FPS to 20 and open 4 full screen windows on different monitors, the whole operating system starts lagging very badly. Rendering is simply clearing the screen to black, no primitives rendered. CPU usage on this test was about 0% and GPU usage about 10%; I don't understand what the bottleneck is here. My development computer is very fast, i7 2700k, AMD HD7900, 16GB RAM, so the tests should definitely run on this one. In comparison, I did some DirectX9 tests on C++/Win32 API with one device/multiple swap chains and I could open 100 windows spread all over the 4-monitor workspace (with a 3D teapot rotating on them) and still had a perfectly responsive operating system (FPS was of course dropping quite badly on the rendering windows, to around 5, which is what I would expect running 100 simultaneous renderings). Does anyone know any good ways to do multi-windowed rendering in C# or am I forced to rewrite my program in C++ to get that performance (major pain)? I guess I'm giving OpenGL another shot before I go the C++ route... I'll report any findings here.
Test methods for reference: For the C# DirectX one-device multiple swapchain test I used the method from this excellent answer: Display Different images per monitor directX 10

Direct3D10 version: I created the d3d10device and DXGIFactory like this:

D3DDev = new SharpDX.Direct3D10.Device(SharpDX.Direct3D10.DriverType.Hardware, SharpDX.Direct3D10.DeviceCreationFlags.None);
DXGIFac = new SharpDX.DXGI.Factory();

Then initialized the rendering windows like this:

var scd = new SwapChainDescription();
scd.BufferCount = 1;
scd.ModeDescription = new ModeDescription(control.Width, control.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm);
scd.IsWindowed = true;
scd.OutputHandle = control.Handle;
scd.SampleDescription = new SampleDescription(1, 0);
scd.SwapEffect = SwapEffect.Discard;
scd.Usage = Usage.RenderTargetOutput;
SC = new SwapChain(Parent.DXGIFac, Parent.D3DDev, scd);
var backBuffer = Texture2D.FromSwapChain<Texture2D>(SC, 0);
_rt = new RenderTargetView(Parent.D3DDev, backBuffer);

Drawing command executed on each rendering iteration is simply:

Parent.D3DDev.ClearRenderTargetView(_rt, new Color4(0, 0, 0, 0));
SC.Present(0, SharpDX.DXGI.PresentFlags.None);

DirectX9 version is very similar. Device initialization:

PresentParameters par = new PresentParameters();
par.PresentationInterval = PresentInterval.Immediate;
par.Windowed = true;
par.SwapEffect = SharpDX.Direct3D9.SwapEffect.Discard;
par.AutoDepthStencilFormat = SharpDX.Direct3D9.Format.D16;
par.EnableAutoDepthStencil = true;
par.BackBufferFormat = SharpDX.Direct3D9.Format.X8R8G8B8;
// firsthandle is the handle of the first rendering window
D3DDev = new SharpDX.Direct3D9.Device(new Direct3D(), 0, DeviceType.Hardware, firsthandle, CreateFlags.SoftwareVertexProcessing, par);

Rendering window initialization:

if (parent.D3DDev.SwapChainCount == 0)
{
    SC = parent.D3DDev.GetSwapChain(0);
}
else
{
    PresentParameters pp = new PresentParameters();
    pp.Windowed = true;
    pp.SwapEffect = SharpDX.Direct3D9.SwapEffect.Discard;
    pp.BackBufferFormat = SharpDX.Direct3D9.Format.X8R8G8B8;
    pp.EnableAutoDepthStencil = true;
    pp.AutoDepthStencilFormat = SharpDX.Direct3D9.Format.D16;
    pp.PresentationInterval = PresentInterval.Immediate;
    SC = new SharpDX.Direct3D9.SwapChain(parent.D3DDev, pp);
}

Code for drawing loop:

SharpDX.Direct3D9.Surface bb = SC.GetBackBuffer(0);
Parent.D3DDev.SetRenderTarget(0, bb);
Parent.D3DDev.Clear(ClearFlags.Target, Color.Black, 1f, 0);
SC.Present(Present.None, new SharpDX.Rectangle(), new SharpDX.Rectangle(), HWND);
bb.Dispose();

C++ DirectX9/Win32 API test with multiple swapchains and one device code is here: http://pastebin.com/tjnRvATJ It's a modified version from Kevin Harris's nice example code.

    Read the article

  • How to cancel drop event in YUI drag & drop utility?

    - by gk
    We are using the drag & drop utility between one source and multiple targets. We have a restriction that one of the targets can only have one child element while the other ones can have multiple items. I have tried subscribing to the dragDropEvent of the proxy item and returning false in case the destination target has multiple child elements, without much luck.

var m = new YAHOO.example.DDList("dli" + j, 'documentSelection');
m.subscribe('dragDropEvent', function(e){
    if (e.info == 'ulMasterDocument' && $('#ulMasterDocument').children().length > 1){
        e.event.canceBubble = true;
        return false;
    }
    return true;
});

Is this code correct? Or do I need to subscribe to some other event? Thanks

    Read the article

  • C# Login Dialog Implementation

    - by arc1880
    I have implemented a LoginAccess class that prompts the user to enter their Active Directory username and password. Then I save the login data as an encrypted file. On every subsequent start of the application, the LoginAccess class will read the encrypted file and check against the Active Directory to see if the login information is still valid. If it is not, then it will prompt the user again. I have made it so that the reading of the encrypted file and displaying of the login dialog is done on a separate thread. A delegate is fired when the login process is complete. The issue that I'm having is that I have a class that is used in multiple places. This class contains the call to the LoginAccess object. Every time I instantiate a new object there are multiple calls to the LoginAccess object and I get multiple dialogs appearing when it tries to prompt for a username and password. Any suggestions on how to have only one dialog appear would be greatly appreciated.

    Read the article

  • Emulating HTTP POST via httpclient 3.x for multi options

    - by Frankie Ribery
    I want to emulate an HTTP POST using application/x-www-form-urlencoded encoding to send an option group that allows multiple selections.

<select name="groups" multiple="multiple" size="4">
  <option value="2">Administration</option>
  <option value="1">General</option>
</select>

Does adding 2 NameValuePairs (NVPs) with the same name work? My server-side log shows that only the first NVP was received. For example:

PostMethod method = ...;
NameValuePair[] nvpairs = {
    new NameValuePair("groups", "2"),
    new NameValuePair("groups", "1")
};
method.addParameters(nvpairs);

Only the groups=1 parameter was received. Thanks

    Read the article

  • Are JSF 2.x ViewScoped Beans Thread Safe?

    - by Mark
    I've been googling for a couple of hours on this issue to no avail. The WELD docs and the CDI spec are pretty clear regarding the thread safety of the scopes provided. For example:

Application Scope - not safe
Session Scope - not safe
Request Scope - safe, always bound to a single thread
Conversation Scope - safe (due to the WELD proxy serializing access from multiple request threads)

I can't find anything on the View Scope defined by JSF 2.x. It is in roughly the same bucket as the Conversation Scope in that it is very possible for multiple requests to hit the scope concurrently despite it being bound to a single view / user. What I don't know is whether the JSF implementation serializes access to the bean from multiple requests. Does anyone have knowledge of the spec or of the Mojarra/MyFaces implementations that could clear this up?

    Read the article

  • Silverlight Line Graph with Gradient

    - by gav
    I have a series of points which I will turn into a line on a graph. What I want is to give the area under the graph a gradient fill, so it would look somewhat similar to a Bloomberg graph. My question really has three parts. First, how should I fill only the area under the graph? Second, how do I fill that with a gradient? Finally, if I have multiple lines on the same graph, any area under more than one line should have a greyscale gradient fill; how would you set this up? My biggest problem is deciding on the data structures to use. I could use many multi-sided shapes (one for each line / data series) and then tell the brush to draw:

Transparent if it's not in any shape
The colour of one series if it's in one shape (alpha relative to height to give the gradient)
Black if it's in multiple shapes (alpha relative to height to give the gradient)

Then I'd draw the shapes' boundaries in white afterwards. Thanks, Gav

    Read the article

  • Thread safety of Matlab engine API

    - by Jeremy
    I have discovered through trial and error that the MATLAB engine function is not completely thread safe. Does anyone know the rules? Discovered through trial and error: On Windows, the connection to MATLAB is via COM, so the COM Apartment threading rules apply. All calls must occur in the same thread, but multiple connections can occur in multiple threads as long as each connection is isolated. From the answers below, it seems that this is not the case on UNIX, where calls can be made from multiple threads as long as the calls are made serially.

    Read the article

  • How to code Fizzbuzz in F#

    - by Russell
    I am currently learning F# and have tried an (extremely) simple example of FizzBuzz. This is my initial attempt:

for x in 1..100 do
    if x % 3 = 0 && x % 5 = 0 then printfn "FizzBuzz"
    elif x % 3 = 0 then printfn "Fizz"
    elif x % 5 = 0 then printfn "Buzz"
    else printfn "%d" x

What solutions could be more elegant/simple/better (explaining why) using F# to solve this problem? Note: The FizzBuzz problem is going through the numbers 1 to 100: every multiple of 3 prints Fizz, every multiple of 5 prints Buzz, and every multiple of both 3 AND 5 prints FizzBuzz. Otherwise, simply the number is displayed. Thanks :)

    Read the article

  • setting up/installing/configuring nginx LEMP stack on fresh VPS server

    - by grant tailor
    I need some help setting up, installing and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core at 1.5GHz (3GHz burst), 100GB RAID 10 storage and unmetered bandwidth @ 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP (I will also need MySQL and PHP) stack guide on Linode http://library.linode.com/lemp-guides/centos-5 but basically what I want is to be able to host multiple websites on this web server after everything is set up. I am used to using the DirectAdmin control panel on other servers and want to have things set up so I can host multiple websites... mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)? Thanks.

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large South East city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins – restore the databases and make them work. There were 4 applications that back ended to 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game. We had time to prepare, so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers, a bit of a cheat actually because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat “IF REQUIRED” left it open to interpretation). I really wanted to avoid the step of restoring Master for a number of reasons, but mainly because I did not want to deal with issues starting SQL services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight. Not to mention other system database location problems that might arise and prevent SQL from starting. I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, for taking the time to restore the source Master database over the newly installed one on the fresh server. What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system and then use that restored database to script the SQL logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted, there were several challenges to overcome. Also, as is usual for any work like this, the usual disclaimers apply: this is not something that I would imagine Microsoft would condone or support, and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure because future updates for SQL Server will render this technique non-functional. I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the master database I named Master_Mine. Since sp_help_revlogin uses system schema objects, sys.syslogins and sys.server_principals, this was not going to work because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login. Even though the test account I created should presumably still be in the Master_Mine database, I should be able to get to it and script out its creation with its password hash so that I would not need to know the password, and any applications that stored that password would not have to be altered in the DR scenario. They would just work as expected. Once I realized that would not work I began looking deeper. Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? It was. And this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored.
These tables existed in both the real Master and the restored copy, Master_Mine. I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead when creating the logins cursor used in sp_help_revlogin. For the password hash, sp_help_revlogin uses the function LoginProperty(), which takes a user name and the option ‘passwordhash’ to return the hash for the user. Unfortunately, it requires the login to exist in the Master database. This would not work. So another slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure, which is also created by and used within the code provided by Microsoft in the link above for sp_help_revlogin. The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD. To open a DAC connection you have to be logged in on the SQL Server itself, via RDP in my case, and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system, under whatever name you like, and then run it. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database, without having to restore over the new Master system database and without the need to access the original server (assuming it was down due to whatever disaster put it in that state). You will note that I am not providing full code samples here of the modifications. I will say that it was a slight bit of work, and anyone who needed to do this for whatever reason could fairly easily roll their own solution with the information provided herein. My goal, as I said, was to prove that this could be done and provide another option if required to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.
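To make the shape of the idea concrete, here is a minimal, hypothetical sketch of the core loop — not my actual modified procedure, and the sys.sysxlgns column names are undocumented assumptions you would need to verify on your own build. It assumes a DAC connection and that sp_hexadecimal from Microsoft's sp_help_revlogin script has already been created:

-- Hypothetical sketch only: run over a DAC connection (ADMIN:ServerName).
-- Reads each SQL login's name and password hash from the restored copy of Master
-- and prints CREATE LOGIN statements that reuse the hash, so the passwords
-- never need to be known. Column names in sys.sysxlgns may differ by build.
DECLARE @name sysname, @pwdhash varbinary(256), @pwdhex varchar(514)

DECLARE logins CURSOR FOR
    SELECT name, pwdhash
    FROM Master_Mine.sys.sysxlgns
    WHERE pwdhash IS NOT NULL          -- SQL logins only; Windows logins carry no hash
      AND name NOT LIKE '##%'          -- skip internal certificate-based logins

OPEN logins
FETCH NEXT FROM logins INTO @name, @pwdhash
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_hexadecimal @pwdhash, @pwdhex OUTPUT
    PRINT 'CREATE LOGIN ' + QUOTENAME(@name) +
          ' WITH PASSWORD = ' + @pwdhex + ' HASHED, CHECK_POLICY = OFF'
    FETCH NEXT FROM logins INTO @name, @pwdhash
END

CLOSE logins
DEALLOCATE logins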

    Read the article

  • How to define a query on an n-m table

    - by user559889
    Hi, I have some trouble defining a query. I have a Product and a Category table. A product can belong to multiple categories and vice versa, so there is also a Product-Category table. Now I want to select all products that belong to a certain category. But if the user does not provide a category, I want all products. I tried to create a query using a join, but this results in a product being selected multiple times if it belongs to multiple categories (in the case where no specific category is queried). What kind of query do I have to create? Thanks
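For illustration, here is a minimal T-SQL sketch of one common way around that duplication, using hypothetical table and column names (Product, ProductCategory, ProductId, CategoryId) and an optional @CategoryId parameter; an EXISTS test instead of a join keeps each product to a single row, and NULL means "no category filter":

-- Hypothetical schema and parameter names; adjust to the real tables.
DECLARE @CategoryId int = NULL;   -- NULL = return all products

SELECT p.*
FROM Product AS p
WHERE @CategoryId IS NULL
   OR EXISTS (SELECT 1
              FROM ProductCategory AS pc
              WHERE pc.ProductId  = p.ProductId
                AND pc.CategoryId = @CategoryId);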

    Read the article

  • Is there an existing solution to the multithreaded data structure problem?

    - by thr
    I've had the need for a multi-threaded data structure that supports these claims:

Allows multiple concurrent readers and writers
Is sorted
Is easy to reason about

Fulfilling multiple readers and one writer is a lot easier, but I really would want to allow multiple writers. I've been doing research into this area, and I'm aware of ConcurrentSkipList (by Lea, based on work by Fraser and Harris) as it's implemented in Java SE 6. I've also implemented my own version of a concurrent skip list based on A Provably Correct Scalable Concurrent Skip List by Herlihy, Lev, Luchangco and Shavit. These two implementations were developed by people that are light years smarter than me, but I still (somewhat ashamed, because it is amazing work) have to ask the question whether these are the only two viable implementations of concurrent multi-reader/writer data structures available today.

    Read the article

  • Different return XML in a WCF Operation

    - by Sean Hederman
    I am writing a service to an international HTTP standard, and there is one method that can return three different XML results; call them Single, Multiple and Error. Now I've written an IXmlSerializable class that can consume each of these results and generate them. However, WCF seems to insist that I can only have a single return XML root name: I have to choose an XmlRoot for my custom object of either Single, Multiple or Error. How can I set up WCF so that I can choose at runtime what the root will be? This is what I have currently:

/// <summary>
/// A collection of items.
/// </summary>
[XmlRoot("Multiple", Namespace = "DAV:")]
public sealed class ItemCollection : IEnumerable<Item>, IXmlSerializable

/// <summary>
/// Processes and returns the items.
/// </summary>
[WebInvoke(Method = "POST", UriTemplate = "{*path}", BodyStyle = WebMessageBodyStyle.Bare)]
[OperationContract]
[XmlSerializerFormat]
ItemCollection Process(string path);

    Read the article

  • how to compile f# on mono

    - by leon
    I am trying to compile this example in Mono on Ubuntu. However, I get this error:

wingsit@wingsit-laptop:~/MyFS/kitty$ fsc.exe -o kitty.exe kittyAst.fs kittyParser.fs kittyLexer.fs main.fs
Microsoft (R) F# 2.0 Compiler build 2.0.0.0
Copyright (c) Microsoft Corporation. All Rights Reserved.

/home/wingsit/MyFS/kitty/kittyAst.fs(1,1): error FS0222: Files in libraries or multiple-file applications must begin with a namespace or module declaration, e.g. 'namespace SomeNamespace.SubNamespace' or 'module SomeNamespace.SomeModule'
/home/wingsit/MyFS/kitty/kittyParser.fs(2,1): error FS0222: Files in libraries or multiple-file applications must begin with a namespace or module declaration, e.g. 'namespace SomeNamespace.SubNamespace' or 'module SomeNamespace.SomeModule'
/home/wingsit/MyFS/kitty/kittyLexer.fsl(2,1): error FS0222: Files in libraries or multiple-file applications must begin with a namespace or module declaration, e.g. 'namespace SomeNamespace.SubNamespace' or 'module SomeNamespace.SomeModule'
wingsit@wingsit-laptop:~/MyFS/kitty$

I am a newbie in F#. Is there something obvious I'm missing?

    Read the article

  • Oracle global lock across process

    - by Jimm
    I would like to synchronize access to a particular insert; hence, if multiple applications execute this "one" insert, the inserts should happen one at a time. The reason behind the synchronization is that there should only be ONE instance of this entity. If multiple applications try to insert the same entity, only one should succeed and the others should fail. One option considered was to create a composite unique key that would uniquely identify the entity and rely on the unique constraint. For various reasons, the DBA department rejected this idea. The other option that came to my mind was to create a stored proc for the insert; if the stored proc can obtain a global lock, then multiple applications invoking the same stored proc, though in their separate database sessions, would have their inserts serialized. My question is: is it possible for a stored proc in Oracle version 10/11 to obtain such a lock? Any pointers to documentation would be helpful.

    Read the article

  • Multiblog engine for asp.net

    - by Andrey
    I know different forms of this question have been asked on this site multiple times, but I haven't seen a single answer that would satisfy my need. I need an ASP.NET-based blogging engine that would use SQL Server as a back end and allow multiple independent blogs in one app instance. I'm writing a community website for a major bank and blogging is the piece I'm not sure about. Answers to other questions cover a broad spectrum, from BlogEngine.NET (doesn't support multiple blogs) to CommunityServer (a beast! blogging is just a small piece of it). I don't want to install a full-blown CRM and just use blogging; I want a blogging engine. I don't mind buying a commercial one but I can't find one. I'm pretty much stuck, and any ideas are highly appreciated!

    Read the article

  • Data truncation when retrieving data from MySQL database with prepared statements

    - by KSiimson
    I have a script that retrieves multiple products using prepared statements. Like putting loops into loops, I have prepared statements inside prepared statements: there is a prepared statement for retrieving all products, a prepared statement to retrieve all images for each product, a prepared statement to get all attributes for each product, and so on. This does not work with one MySQLi instance, so I use multiple MySQLi objects that are opened and closed when needed. It usually works fine, but sometimes, especially when displaying multiple products, some data is truncated. For example, MicoLoans becomes MicoLoa. There was an actual spelling mistake here - now when I changed MicoLoans to MicroLoans, the same page displayed MicroLoa... So the same number of characters was truncated from the end. It is sort of consistent where it appears - for example, there can be descriptions for 8 products, and the description of 1 product is heavily truncated. When I add a 9th product, the short description is still truncated for that same product as before. Any ideas?

    Read the article

  • What is a .NET managed module?

    - by Abhijeet Patel
    I know it's a Windows PE32, but I also know that the unit of deployment in .NET is an assembly, which in turn has a manifest and can be made up of multiple managed modules. My questions are:

1) How would you create multiple managed modules when building a project such as a class lib or a console app, etc.?
2) Is there a way to specify this to the compiler (via the project properties, for example) to partition your source code files into multiple managed modules? If so, what is the benefit of doing so?
3) Can managed modules span assemblies?
4) Are separate files created on disk when the source code is compiled, or are these created in memory and directly embedded in an assembly?

    Read the article

  • What is the best way to add categories to posts - Ruby on Rails blog...

    - by bgadoci
    I am new to Ruby and Rails so bear with me please. I have created a very simple blog application with both posts and comments. Everything works great. My next question is regarding adding categories. I am wondering the best way to do this. As I can't see too far in front of me yet when it comes to Rails, I thought I would ask. To be clear, I would like a single post to be able to have multiple categories and a category to be able to have multiple posts. Is the best way to do this to create a 'categories' table and then use the posts and categories models to do has_many :posts, has_many :categories? Would I also then set up routes.rb such that posts are embedded under categories? Or is there an easier way by simply adding a category column to the existing posts table? (In which case I would imagine having multiple categories would be difficult.)

    Read the article
