Search Results

Search found 43979 results on 1760 pages for 'sql down under'.


  • Error connecting ESX 5.0.0 to domain

    - by Saariko
    I am trying to connect an ESX 5.0.0 host to our Domain Controller, in order to give a domain group specific role-based security. But I do not see any groups after the host connects to the domain. Under Configuration - Authentication Services, I connected the host to the domain. I then created the role I wanted, with the selected approved features. But when I want to add a permission to a set of VMs, I cannot see "my domain" in the drop-down, only "localhost". How do I get "my domain" to appear in the Domain drop-down, so I can select the domain group to give the role to? To note: I followed the instructions on the VMware site for connecting to the domain.

    Read the article

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to process the update of the particle system I've built with DirectX 10. So far, my update method for the particle system is one line of code inside a for loop that makes each particle fall down the y axis to simulate a waterfall:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    In my .cu file I've created a struct, copied from my particle class, which is as follows:

        struct ParticleType
        {
            float positionX, positionY, positionZ;
            float red, green, blue;
            float velocity;
            bool active;
        };

    I also have an UpdateParticle method in the .cu file, which takes the three main parameters my particles need to update themselves according to the original line of code:

        __global__ void UpdateParticle(float* position, float* velocity, float frameTime)
        {
        }

    This is my first CUDA program and I'm at a loss as to what to do next. I've tried simply putting the particleList line in the UpdateParticle method, but then the particles don't fall as they should. I believe this is because I am not calling something that I need to from the class where the particle-fall code used to be. Could someone please tell me what I am missing to get it working as it should? If I am doing this completely wrong in general, please let me know as well.
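
    A minimal sketch of one way this could be completed, assuming the per-particle fields are copied into plain device arrays first (the extra numParticles parameter and all host-side names here are hypothetical, not from the original code):

        __global__ void UpdateParticle(float* positionY, float* velocity,
                                       float frameTime, int numParticles)
        {
            // One thread per particle, same fall equation as the CPU loop.
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < numParticles)
                positionY[i] -= velocity[i] * frameTime * 0.001f;
        }

        // Host side: copy data to the GPU, launch the kernel, copy results back.
        void UpdateParticlesOnGpu(float* h_positionY, float* h_velocity,
                                  int numParticles, float frameTime)
        {
            float *d_positionY, *d_velocity;
            size_t bytes = numParticles * sizeof(float);
            cudaMalloc(&d_positionY, bytes);
            cudaMalloc(&d_velocity, bytes);
            cudaMemcpy(d_positionY, h_positionY, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(d_velocity, h_velocity, bytes, cudaMemcpyHostToDevice);

            int threads = 256;
            int blocks = (numParticles + threads - 1) / threads;
            UpdateParticle<<<blocks, threads>>>(d_positionY, d_velocity,
                                                frameTime, numParticles);

            cudaMemcpy(h_positionY, d_positionY, bytes, cudaMemcpyDeviceToHost);
            cudaFree(d_positionY);
            cudaFree(d_velocity);
        }

    Note the kernel itself never loops over particles or frames; the host must launch it (and copy memory) every frame, which is usually the step that is missed when the particles stop moving.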

    Read the article

  • 1st New England Business Intelligence Code Camp

    This is a major Business Intelligence community event for developers and IT professionals focused on building real-world BI solutions using the Microsoft Business Intelligence platform tools and technology, taking place on May 22nd, 2010 in Waltham, MA.

    Read the article

  • Can we change control of two keys on keyboard?

    - by mr_eclair
    I'm using an HP EliteBook 8440p, and two keys on my keyboard aren't working: v and b. Replacing the laptop keyboard will take 2-3 days, and I can't stop my office work. I'm tired of using the on-screen keyboard to write v and b, and I don't have a portable USB keyboard to connect to the laptop right now. Since I'm not using the Pg Up and Pg Dn buttons at all, is there any software or trick that could make it so that pressing Pg Up writes v and Pg Dn writes b? Hoping for a quick and positive response.

    Read the article

  • Context-specific remap

    - by dotancohen
    I have the following handy Vim mapping:

        inoremap ( ()<Left>

    However, sometimes I enter Insert mode to add a function call around a variable, like so:

        Was: $sql = "SELECT * FROM " . $someTable;
        To:  $sql = "SELECT * FROM " . mysql_real_escape_string($someTable);

    The mapping leaves a redundant ) after mysql_real_escape_string(. Is there any way to refactor the mapping so that if there is a character after the cursor, and that character is not whitespace, then )<Left> is not appended to (? Thanks.
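
    One possible refactoring (an untested sketch) is an expression mapping that inspects the buffer character after the cursor and only auto-closes when that character is missing or whitespace:

        " Insert a bare '(' when the character after the cursor is
        " non-whitespace; otherwise insert '()' and step back inside.
        inoremap <expr> ( getline('.')[col('.') - 1] =~# '\S' ? '(' : "()\<Left>"

    With this, typing ( just before $someTable inserts only the opening parenthesis, while typing ( at the end of a line still gives the auto-closed pair.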

    Read the article

  • VMWare Lab Manager: What's the best way to build Library Configurations?

    - by mcohen75
    We're using Lab Manager within our QA group to quickly deliver the environments we need for testing. We have 25 templates, 14 library configurations, and counting. To build up our templates, we:

    1. Create a base template that is a bare-bones version of Server 2008 plus basic configuration (Windows Update, firewall exceptions)
    2. Create a linked clone for each server template we need (SQL Server 08, 05, etc.)
    3. Repeat for other OSs, like Windows 7 and Windows XP

    Then we create configurations:

    1. Create a workspace configuration with multiple images in it (say, Server 08 w/SQL Server and Windows 7)
    2. Deploy the configuration and make some minor configuration changes
    3. Undeploy and capture to the library

    How do we keep this manageable? When I need to update a configuration, should I:

    1. Rebuild it from templates
    2. Clone it to a workspace, make changes, and recapture it to the library
    3. Keep the configuration in my workspace (i.e. don't delete it after capturing it to the library), deploy it to make changes, and then re-capture it to the library

    Read the article

  • Disaster In The Real World - #2

    Back in April Steve Jones wrote up a disaster at work. Andy had one this week and wrote up the story too. Copy cat! Pretty soon everyone will be having a disaster and writing a story about it! Give these guys credit for letting you see what happens when it ALL goes bad. Disaster recovery is hard to sell and hard to do, reading the article might give you an idea that will save you some time and/or data one day.

    Read the article

  • Exciting DBA and BI role in London for fast growing startup

    - by simonsabin
    One of my clients is looking for a DBA and a BI developer. They are a very exciting dotcom company with cutting-edge technology and are growing fast. A bit older than a startup, but they still have that feel about them. They are based in North London and are a very nice company to work for: flexible hours, working from home. Plus, they are willing to pay for the right candidate. There is at least 1 DBA role and 1 BI role going. If you are interested then let me know http://sqlblogcasts.com/blogs/simons...(read more)

    Read the article

  • Efficient algorithm for Virtual Machine(VM) Consolidation in Cloud

    - by devansh dalal
    PROBLEM: We have N physical machines (PMs), each with RAM Ri and CPU Ci, and a set of currently scheduled VMs, each with RAM requirement ri and CPU requirement ci respectively. Moving (migrating) any VM from one PM to another has an associated cost that depends on its RAM ri. A PM with no VMs is shut down to save power. Our target is to minimize the weighted sum of (N, migration cost) by migrating some VMs, i.e. to minimize the number of working PMs while not degrading the service level through excessive migrations. My approach: The brute-force approach is to choose the least-loaded PM and try to fit its VMs onto the other PMs with the First Fit Decreasing algorithm; alternatively, we can select victim and target PMs based on their load levels and shut down victims where possible by moving their VMs to targets (a rough sketch of this baseline follows below). I tried this greedy approach on data from Baadal (the IIT-D cloud), but it isn't giving promising results. I have also tried to study ant colony optimization for dynamic VM consolidation, but was unable to understand much of it. I used these links: http://dumas.ccsd.cnrs.fr/docs/00/72/52/15/PDF/Esnault.pdf http://hal.archives-ouvertes.fr/docs/00/72/38/56/PDF/RR-8032.pdf Would anyone please clarify the solution or suggest a new approach/resources for better performance? I am basically searching for algorithms, not physical optimizations, and I know that many commercial organizations provide such solutions; I just want to know more about the underlying algorithms. Thanks in advance.
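
    For reference, here is a rough sketch (in Python, with simplified data structures; not the author's actual Baadal code) of the greedy baseline described above: repeatedly pick the least-loaded PM and try to evacuate it into the remaining PMs using First Fit Decreasing:

        from dataclasses import dataclass, field

        @dataclass
        class VM:
            ram: float
            cpu: float

        @dataclass
        class PM:
            ram: float
            cpu: float
            vms: list = field(default_factory=list)

            def used(self):
                return (sum(v.ram for v in self.vms), sum(v.cpu for v in self.vms))

            def fits(self, vm):
                ur, uc = self.used()
                return ur + vm.ram <= self.ram and uc + vm.cpu <= self.cpu

        def try_evacuate(victim, targets):
            """Tentatively place all of victim's VMs on targets, largest RAM first."""
            placed = []
            for vm in sorted(victim.vms, key=lambda v: v.ram, reverse=True):
                host = next((t for t in targets if t.fits(vm)), None)
                if host is None:
                    # Roll back tentative placements; victim cannot be emptied.
                    for h, v in placed:
                        h.vms.remove(v)
                    return False
                host.vms.append(vm)
                placed.append((host, vm))
            victim.vms.clear()
            return True

        def consolidate(pms):
            """Repeatedly power off the least-loaded PM whose VMs fit elsewhere."""
            active = [p for p in pms if p.vms]
            changed = True
            while changed:
                changed = False
                active.sort(key=lambda p: sum(v.ram for v in p.vms))
                for victim in list(active):
                    targets = [p for p in active if p is not victim]
                    if try_evacuate(victim, targets):
                        active.remove(victim)   # victim can now be shut down
                        changed = True
                        break
            return active

    Migration cost can be folded in by rejecting any evacuation whose total RAM moved outweighs the power saving of shutting the victim down.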

    Read the article

  • What's better for deploying a website + DB on EC2: 2 small VMs or a large one?

    - by devguy
    I'm planning the deployment of a mid-sized website with a SQL Server Standard DB, and I've chosen Amazon EC2 to deploy it. I now have to choose between these two options:

    1) Get two small instances (1 core each, 1.7 GB of RAM each): one for the IIS front-end, one for running the DB. Note: these "small instances" can only run the 32-bit version of Win2008 Server.

    2) Get a single large instance (4 cores, 7.5 GB of RAM) where I'd install both IIS and SQL Server. Note: this large instance can only run the 64-bit version of Win2008 Server.

    What's better in terms of performance, scalability, ease of management (e.g. launching a new instance while I back up the principal instance), etc.? All suggestions and points of view are welcome!

    Read the article

  • Combine OS partion with data partition on NAS4Free/FreeNAS

    - by Pak
    I recently built a NAS4Free (formerly FreeNAS) machine using a 256MB (yes, MB) USB drive for the OS. When I did the original install, I had the bright idea of making the OS partition just big enough for the OS and then creating a second partition from the remainder of the drive to store things pertaining to the OS. I never really found a use for the data partition, and I have now run out of space on the OS partition, so I'd like to combine the two into a single partition. Is this something that is possible to do while everything is up and running? If it comes down to it, I can take down the machine and do a fresh install of the OS using the entire space of the USB drive, but I'd like to use this as an opportunity to become more familiar with FreeBSD/UNIX-type systems. If this is possible, will it interfere with NAS4Free? The data partition shows up in the web interface under the disks section, and if I end up manually changing the partitions, I'd be concerned about NAS4Free getting confused by the missing partition.
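
    For reference, a hedged sketch of how the merge might look with FreeBSD's gpart and growfs, assuming the stick appears as da0, the OS lives in slice 1 and the data partition is index 2, and the stick is attached offline to another FreeBSD system (all device names and indexes here are illustrative; verify with gpart show first):

        gpart show da0            # confirm the actual layout and indexes
        gpart delete -i 2 da0     # drop the unused data partition
        gpart resize -i 1 da0     # grow slice 1 into the freed space
        growfs /dev/da0s1a        # grow the UFS filesystem to match

    Resizing the live, mounted OS partition in place is risky at best, so the fresh-install route may well be the safer one.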

    Read the article

  • TSQL Challenge 31 - Managing multiple overlapping date intervals.

    This challenge is adapted from a budgeting system used in a large company to perform quarterly analysis of what kind of work will be done and where it will be done. Project Managers make plans, and the estimated hours of work required from each employee each month end up in a central database. Top managers want to see a synthesis of this by department and profession.
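
    As a warm-up for the challenge, here is a generic sketch (with hypothetical table and column names, not the challenge's actual schema) of the standard T-SQL predicate for detecting that two date intervals overlap:

        SELECT a.EmployeeID, a.StartDate, a.EndDate,
               b.StartDate AS OverlapStart, b.EndDate AS OverlapEnd
        FROM PlannedWork AS a
        JOIN PlannedWork AS b
          ON a.EmployeeID = b.EmployeeID
         AND a.WorkID < b.WorkID              -- report each pair once
         AND a.StartDate <= b.EndDate         -- intervals overlap when each
         AND b.StartDate <= a.EndDate;        -- one starts before the other ends

    Packing the overlapping intervals into disjoint ranges before summing hours is the harder part, and is the heart of the challenge.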

    Read the article

  • Rules of Holes -#2: You Are Still in a Hole

    - by ArnieRowland
    OK. So you followed the First Rule of Holes -you stopped digging yourself in deeper. But now what? You are still in a Hole. Your situation has not changed much, but at least you are no longer making it worse. You need to redirect the digging effort into escape and avoidance efforts. The Hole has a singular purpose -consuming all of your time and effort. AND it has succeeded! But now you are going to redirect your efforts for your own survival. You need to look around, take stock of the situation....(read more)

    Read the article

  • Did you know documentation is built-in to usp_ssiscatalog?

    - by jamiet
    I am still working apace on updates to my open source project SSISReportingPack; specifically, I am working on improvements to usp_ssiscatalog, which is a stored procedure that eases the querying and exploration of the data in the SSIS Catalog. In this blog post I want to share a titbit of information about usp_ssiscatalog: all the actions that you can take when you execute usp_ssiscatalog are documented within the stored procedure itself. For example, if you simply execute

        EXEC usp_ssiscatalog @action='exec'

    in SSMS and then switch over to the messages tab, you will see some information about the action. OK, that's kinda cool. But what if you only want to see the documentation and don't actually want any action to take place? Well, you can do that too, using the @show_docs_only parameter like so:

        EXEC dbo.usp_ssiscatalog @a='exec',@show_docs_only=1;

    That will only show the documentation. Wanna read all of the documentation? That's simply:

        EXEC dbo.usp_ssiscatalog @a='exec',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='execs',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='configure',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='exec_created',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='exec_running',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='exec_canceled',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='exec_failed',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='exec_pending',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='exec_ended_unexpectedly',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='exec_succeeded',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='exec_stopping',@show_docs_only=1;
        EXEC dbo.usp_ssiscatalog @a='exec_completed',@show_docs_only=1;

    I hope that comes in useful for you sometime. Have fun exploring the documentation on usp_ssiscatalog. If you think the documentation can be improved, please do let me know. @jamiet

    Read the article

  • Managing Confidence

    - by andyleonard
    Introduction: This post is the fifty-third part of a ramble-rant about the software business. The current posts in this series can be found on the series landing page. This post is about inspiring others. Hot Chicks - Baby chickens beneath a warming lamp… </NonSubtleSEOPloy> For those who do not know, we raise chickens that lay eggs – referred to as "laying hens". Natural attrition has taken our flock of laying hens down to 11, plus one rooster. We recently received an order of new chicks (pictured...(read more)

    Read the article

  • Performance Tuning in the Age of Big Data

    Database Administrators must now deal with large volumes of data and new forms of high-speed data analysis. If your responsibility includes performance tuning, here are the areas to focus on that will become more and more important in the age of Big Data.

    Read the article

  • Visual Studio 2010: Is it possible to force editor to use ANSI rather than UTF-8?

    - by Mark Redman
    I am having issues with some files in automated processes, specifically with batch files and SQL files. Visual Studio seems to create these as UTF-8 rather than ANSI and adds some special characters to the beginning of the file (I believe this is called a preamble, or byte-order mark). This breaks running batch files and running SQL files through osql.exe. I have had issues myself in the past when creating text files using C#, but I can get around that through the encoding. However, it seems a bit strange that I can't use Visual Studio to create batch files and SQL files in a database project for automation.
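
    For the C# side of this, a small sketch (file names and contents are illustrative) showing how the preamble can be suppressed when generating such files yourself:

        using System.IO;
        using System.Text;

        class BomFreeWriter
        {
            static void Main()
            {
                // UTF8Encoding(false) means "do not emit the BOM preamble".
                var utf8NoBom = new UTF8Encoding(false);
                File.WriteAllText("deploy.bat", "@echo off\r\n", utf8NoBom);

                // Encoding.Default writes the current ANSI code page on
                // .NET Framework, which osql.exe and cmd.exe handle happily.
                File.WriteAllText("script.sql", "SELECT 1;\r\n", Encoding.Default);
            }
        }

    Files produced this way start directly with their content, so cmd.exe and osql.exe never see the three BOM bytes that break them.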

    Read the article

  • Binding keys from specific device in X.org

    - by Michal Cihar
    I have a remote control for presentations which generates Next/Prior key events in X.org (Page Up/Down). I'd like to use these for navigating a playlist (using MPD, but that probably doesn't matter). The problem is that I want this control to work all the time (without the application having focus), and I don't want to lose the Page Up/Down functionality of the normal keyboard. Is there an application that would allow me to bind actions to events from a specific keyboard? Or is there a simple way to implement such a thing on my own (a rough sketch follows below)?
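
    One possible do-it-yourself sketch: xinput can watch a single input device, so a small shell loop can translate only the remote's key presses into MPD commands via mpc (the device id 11 and the evdev keycodes here are assumptions; check yours with xinput list and xinput test):

        xinput list                      # find the remote's device id first
        xinput test 11 | while read ev state code; do
            [ "$ev $state" = "key press" ] || continue
            case "$code" in
                112) mpc prev ;;         # keycode 112 = Prior (Page Up)
                117) mpc next ;;         # keycode 117 = Next (Page Down)
            esac
        done

    Note this only listens: the remote's events still reach whatever application has focus, so a device-specific grab (e.g. via the XInput2 API) would be needed to suppress them entirely.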

    Read the article

  • Using Subjects to Deploy Queries Dynamically

    - by Roman Schindlauer
    In the previous blog posting, we showed how to construct and deploy query fragments to a StreamInsight server, and how to re-use them later. In today's posting we'll integrate this pattern into a method of dynamically composing a new query with an existing one. The construct that enables this scenario in StreamInsight V2.1 is a Subject. A Subject lets me create a junction element in an existing query that I can tap into while the query is running. To set this up as an end-to-end example, let's first define a stream simulator as our data source:

        var generator = myApp.DefineObservable(
            (TimeSpan t) => Observable.Interval(t).Select(_ => new SourcePayload()));

    This 'generator' produces a new instance of SourcePayload with a period of t (system time) as an IObservable. SourcePayload happens to have a property of type double as its payload data. Let's also define a sink for our example—an IObserver of double values that writes to the console:

        var console = myApp.DefineObserver(
            (string label) => Observer.Create<double>(e => Console.WriteLine("{0}: {1}", label, e)))
            .Deploy("ConsoleSink");

    The observer takes a string as parameter, which is used as a label on the console so that we can distinguish the output of different sink instances. Note that we also deploy this observer, so that we can retrieve it later from the server from a different process. Remember how we defined the aggregation as an IQStreamable function in the previous article? We will use that as well:

        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value))
            .Deploy("AverageQuery");

    Then we define the Subject, which acts as an observable sequence as well as an observer. Thus, we can feed a single source into the Subject and have multiple consumers—that can come and go at runtime—on the other side:

        var subject = myApp.CreateSubject("Subject", () => new Subject<SourcePayload>());

    Subjects are always deployed automatically. Their name is used to retrieve them from a (potentially) different process (see below). Note that the Subject as we defined it here doesn't know anything about temporal streams. It is merely a sequence of SourcePayloads, without any notion of StreamInsight point events or CTIs. So in order to compose a temporal query on top of the Subject, we need to 'promote' the sequence of SourcePayloads into an IQStreamable of point events, including CTIs:

        var stream = subject.ToPointStreamable(
            e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

    In a later posting we will show how to use Subjects that have more awareness of time and can be used as a junction between QStreamables instead of IQbservables. Having turned the Subject into a temporal stream, we can now define the aggregate on this stream. We will use the IQStreamable entity avg that we defined above:

        var longAverages = avg(stream, TimeSpan.FromSeconds(5));

    In order to run the query, we need to bind it to a sink, and bind the subject to the source:

        var standardQuery = longAverages
            .Bind(console("5sec average"))
            .With(generator(TimeSpan.FromMilliseconds(300)).Bind(subject));

    Lastly, we start the process:

        standardQuery.Run("StandardProcess");

    Now we have a simple query running end-to-end, producing results. What follows next is the crucial part of tapping into the Subject and adding another query that runs in parallel, using the same query definition (the "AverageQuery") but with a different window length. We are assuming that we connected to the same StreamInsight server from a different process or even client, and thus have to retrieve the previously deployed entities through their names:

        // simulate the addition of a 'fast' query from a separate server connection,
        // by retrieving the aggregation query fragment
        // (instead of simply using the 'avg' object)
        var averageQuery = myApp
            .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");

        // retrieve the input sequence as a subject
        var inputSequence = myApp
            .GetSubject<SourcePayload, SourcePayload>("Subject");

        // retrieve the registered sink
        var sink = myApp.GetObserver<string, double>("ConsoleSink");

        // turn the sequence into a temporal stream
        var stream2 = inputSequence.ToPointStreamable(
            e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

        // apply the query, now with a different window length
        var shortAverages = averageQuery(stream2, TimeSpan.FromSeconds(1));

        // bind new sink to query and run it
        var fastQuery = shortAverages
            .Bind(sink("1sec average"))
            .Run("FastProcess");

    The attached solution demonstrates the sample end-to-end. Regards, The StreamInsight Team

    Read the article

  • Ranking with PowerPivot – a different approach

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote an interesting post about a "different approach" to creating a ranking measure with PowerPivot. If you know DAX, or you have read our book, you will find that a DAX expression can solve the issue. However, such a formula is more complex than necessary. The next version of PowerPivot might have more built-in DAX functions and should solve the ranking need with a simpler formula. In the meantime, it is interesting to know a different approach that relies on Excel skills instead of...(read more)

    Read the article

  • Repository query conditions, dependencies and DRY

    - by vFragosop
    To keep it simple, let's suppose an application which has Accounts and Users. Each account may have any number of users. There are also three consumers of UserRepository:

    1. An admin interface which may list all users
    2. A public front-end which may list all users
    3. An account-authenticated API which should only list its own users

    Assuming UserRepository is something like this:

        class UsersRepository extends DatabaseAbstraction {
            private function query() {
                return $this->database()->select('users.*');
            }

            public function getAll() {
                return $this->query()->exec();
            }

            // IMPORTANT:
            // Tons of other methods for searching, filtering,
            // joining of other tables, ordering and such...
        }

    Keeping in mind the comment above, and the necessity to abstract user querying conditions, how should I handle querying of users filtered by account_id? I can picture three possible roads:

    1. Should I create an AccountUsersRepository?

        class AccountUsersRepository extends UserRepository {
            public function __construct(Account $account) {
                $this->account = $account;
            }

            private function query() {
                return parent::query()
                    ->where('account_id', '=', $this->account->id);
            }
        }

    This has the advantage of reducing the duplication of UsersRepository methods, but doesn't quite fit into anything I've read about DDD so far (I'm a rookie, by the way).

    2. Should I put it as a method on AccountsRepository?

        class AccountsRepository extends DatabaseAbstraction {
            public function getAccountUsers(Account $account) {
                return $this->database()
                    ->select('users.*')
                    ->where('account_id', '=', $account->id)
                    ->exec();
            }
        }

    This requires the duplication of all UserRepository methods and may need another UserQuery layer that implements that querying logic in a chainable way.

    3. Should I query UserRepository from within my Account entity?

        class Account extends Entity {
            public function getUsers() {
                return UserRepository::findByAccountId($this->id);
            }
        }

    This feels more like an aggregate root to me, but introduces a dependency of UserRepository on the Account entity, which may violate a few principles.

    4. Or am I missing the point completely? Maybe there's an even better solution?

    Footnotes: Although permissions are a Service concern, in my understanding they shouldn't implement SQL queries themselves but leave those to repositories, since repositories may not even be SQL-driven.

    Read the article

  • Router not assigning an IP address after installing OS X 10.6

    - by Vaibhav Bajpai
    I recently installed Mac OS X 10.6, but the Ethernet state is down and the assigned IP is a self-assigned 169.x.x.x address. When I boot a live USB of Ubuntu on the same machine, I am properly assigned an IP in the range 192.168.1.x from the router at 192.168.1.1. I am using the same router and the same Ethernet line. I tried to ping 192.168.1.1 from my Mac and I get a "host down" message. I also tried to manually assign an IP and set the router address to 192.168.1.1, but the router is still unreachable.

    Read the article
