Search Results

Search found 117232 results on 4690 pages for 'sql user group'.


  • Getting user data from Active Directory using PL/SQL

    - by David Neale
    I had a discussion today regarding an Oracle procedure I wrote some time ago. I wanted to get 7500 user email addresses from Active Directory using PL/SQL. AD will return a maximum of 1000 rows and the LDAP provider used by Oracle will not support paging. Therefore, my solution was to filter on the last two characters of the sAMAccountName (*00,*01,*02...etc.). This results in 126 queries (100 for account names ending in digits, 26 for those ending in a letter...this was sufficient for my AD setup). The person I was speaking to (it was a job interview by the way) said he could have done it a better way, but he would not tell me what that method was. Could anybody hazard a guess at what this method was?

    Read the article

  • Creating persisted computed columns when the user-defined scalar function appears to be non-deterministic

    - by Ralph Shillington
    I have a scalar UDF that I know to be deterministic; however, SQL Server doesn't. Is there a way to declare it as deterministic so that I can then use it in a persisted computed column definition? Further clarification: the purpose of this exercise is that I need to harvest out specific values from an XML column on the row. I can't use the value method of the xml column in my computed column definition, but I can use it in a UDF. I know the XPath query in the value method will produce the same output given the same input, so while I certainly understand that not all calls to value will be deterministic, I want to assert that mine is.
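    A minimal T-SQL sketch of the usual workaround: SQL Server has no explicit "deterministic" keyword for UDFs; it infers determinism from the function body, and only does so when the function is created WITH SCHEMABINDING. The table, column and XPath below are illustrative, and whether the column is accepted as PERSISTED still depends on SQL Server's own determinism check of the wrapped value() call.

      -- Schema-bound scalar UDF wrapping the xml value() call (illustrative names)
      CREATE FUNCTION dbo.GetOrderNumber (@doc xml)
      RETURNS int
      WITH SCHEMABINDING   -- required before SQL Server will even consider the function deterministic
      AS
      BEGIN
          RETURN @doc.value('(/Order/@Number)[1]', 'int');
      END;
      GO

      -- Use the function in a computed column; PERSISTED is allowed only if the
      -- determinism/precision checks pass for the schema-bound function.
      ALTER TABLE dbo.Orders
          ADD OrderNumber AS dbo.GetOrderNumber(OrderXml) PERSISTED;

    Afterwards, OBJECTPROPERTY(OBJECT_ID('dbo.GetOrderNumber'), 'IsDeterministic') shows how SQL Server classified the function.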

    Read the article

  • Five Years of Bonn-to-Code.Net: Time to Celebrate!

    - by WeigeltRo
    When I founded the .NET user group "Bonn-to-Code.Net" on January 1, 2006 (my colleague Jens Schaller came up with the brilliant name, a nod to the motto of my blog), I had no idea how quickly everything would develop. After a little promotion through various channels, the first meeting could already take place on February 14, 2006, and a few days later Bonn-to-Code.Net was officially admitted to the circle of INETA user groups. That is now a little over five years ago, and it will be celebrated in style on March 22, 2011 at 19:00 (doors open at 18:30) as part of our March meeting. The evening offers talks on "Flow Design and its Implementation with Event Based Components" as well as "WCF Services with a Twist" (more detailed information on the talk contents is available here). Afterwards there will be a big raffle in which, besides books, some high-calibre software prizes can be won. In addition to licenses for JetBrains ReSharper and the Telerik Ultimate Collection, this time (with the kind support of Microsoft Deutschland) a copy each of Windows 7 Ultimate and Office 2010 Professional Plus awaits its lucky winner. And anyone who doesn't arrive too late can grab one of many small goodies without needing any luck in the draw. Registration is not required; directions can be found on the Bonn-to-Code.Net website. I am particularly pleased that for this date we have on board, among others, a speaker who was already there at the founding meeting: Stefan Lieser. Now known, for example, through the Clean Code Developer initiative, Stefan is just one example of a whole series of speakers at the various developer conferences who gained their first experience at, among other places, Bonn-to-Code.Net.
    ...and what has happened in those five years? Quite a bit! A Community Launch Event in 2007, two Microsoft TechTalks (2007, 2008), guest speakers from all over Germany and from abroad (JP Boodhoo, Harry Pierson). But nothing has shaped these five years as much as the collaboration with "the neighbours from Cologne". At the time Bonn-to-Code.Net was founded there was no .NET user group in the entire Cologne/Bonn area. So it was not unusual that the first interested person to respond to my blog post of January 4, 2006 came from Cologne: Albert Weinert. A short time after the Bonn group, the .NET User Group Köln was finally founded, initiated by Angelika Wöpking and Stefan Lange. Stefan, for his part, had already attended Bonn meetings before the Cologne founding meeting at the end of April; all in all, quite a lot of personal overlap between Cologne and Bonn. When, after a somewhat bumpy start for the Cologne group, Albert and Stefan eventually took over its leadership, it was clear that Cologne and Bonn would work closely together in many respects, be it by coordinating topics and dates or by promoting each other's meetings. The next step came with the involvement of the Cologne and Bonn groups in organizing the "AfterLaunch" in April 2008. The great success of that event was the spur to open a new chapter in the collaboration. At the beginning of 2009 the association dotnet Köln/Bonn e.V. was founded to provide a solid foundation for large events of our own. In May 2009 the first "dotnet Cologne" followed, a complete success. And with "dotnet Cologne 2010" the conference established itself as the big .NET community event in Germany. On May 6, 2011 "dotnet Cologne 2011" will now take place; behind the scenes, preparations have been running at full speed for months. All in all, a very exciting five years in which a great deal has happened. Let's see what the next five years bring...

    Read the article

  • September Independent Oracle User Group (IOUG) Regional Events:

    - by Mandy Ho
    September 5, 2012 – Denver, CO: Oracle 11g Database Upgrade Seminar. Join Roy Swonger, Senior Director of Software Development at Oracle, to learn about upgrading to Oracle Database 11g. Topics include all the required preparatory steps, database upgrade strategies, post-upgrade performance analysis, and helpful tips and common pitfalls to watch out for. http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=152242&src=7598177&src=7598177&Act=4
    September 6, 2012 – Salt Lake City, UT: Fall Symposium 2012. Plan to join us for our annual fall event on Sept 6. The day will be filled with learning and networking, with tracks focused on Applications, APEX, BI, Development and DBA topics. This event is free for UTOUG members to attend, but please register. http://www.utoug.org/apex/f?p=972:2:6686308836668467::::P2_EVENT_ID:121
    September 6, 2012 – Portland, OR: Oracle Database Security Workshop, part of Oracle's hands-on workshop series focused on providing defense-in-depth solutions to secure data at the source, reduce risk and simplify compliance. This one-day hands-on session is for IT Managers, IT Security Architects and Oracle DBAs who are looking for solutions to address their information protection, privacy, and accountability challenges within their Oracle database environment. Most security programs offered today fail to adequately address database security. Customers continue to be challenged to secure information against loss and protect the integrity of sensitive information like critical financial data, personally identifiable information (PII) and credit card data for PCI compliance. http://nwoug.org/content.aspx?page_id=87&club_id=165905&item_id=241082
    September 11, 2012 – Montreal, QC: APEXposed! For APEX aficionados: join ODTUG in Montreal, September 11-12, for APEXposed! Topics will include Dynamic Actions, Plug-ins, Tuning, and Building Mobile Apps. The cost is $399 US and early registration ends August 15th. For more information: http://www.odtugapextraining.com
    September 11, 2012 – Philadelphia, PA: Big Data & What Are We Still Doing Wrong, with Tom Kyte. Tom Kyte is a Senior Technical Architect in Oracle's Server Technology Division, the Tom behind the AskTom column in Oracle Magazine, and the author of Expert Oracle Database Architecture (Apress, 2005/2009) among other books. Big Data abstract: the term "big data" draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. However, beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. This presentation will take a look at what Big Data is and means, and Oracle's strategy for handling it. "What are we still doing wrong?" abstract: I've given many best practices presentations in the last 10 years. I've given many worst practices presentations in the last 10 years. I've seen some things change over the last ten years and many other things stay exactly the same. In this talk we'll be taking a look at the good and the bad: what we do right and what we continue to do wrong over and over again. We'll look at why "Why" is probably the right initial answer to most any question. We'll look at how we get to "know what we know", and why that can be both a help and a hindrance. We'll peek at "best practices" and tie them into what I term "worst practices". In short, a talk on the good and the bad. http://ioug.itconvergence.com/pls/apex/f?p=207:27:3669516430980563::NO
    September 12, 2012 – New York, NY: NYOUG Fall General Meeting. "Trends in Database Administration and Why the Future of Database Administration is the vDBA". http://www.nyoug.org/upcoming_events.htm#General_Meeting1
    September 21, 2012 – Cleveland, OH: Oracle Database 11g for Developers: What You Need to Know / Oracle Database 11g New Features for Developers. Attendees are introduced to the new and improved features of Oracle 11g (both Oracle 11g R1 and Oracle 11g R2) that directly impact application development. Special emphasis is placed on features that reduce development time, make development simpler, improve performance, or speed deployment. Specific topics include new SQL functions, virtual columns, result caching, XML improvements, pivot statements, JDBC improvements, and PL/SQL enhancements such as compound triggers. http://www.neooug.org/
    September 24, 2012 – Ottawa, ON: Introduction to Oracle Spatial. The free Oracle Locator functionality, and the Oracle Spatial option which dramatically extends Locator, are very useful but poorly understood capabilities of the database. In the afternoon we will extend into additional areas selected from: storage and performance; answering business problems with spatial queries; using Oracle Maps in OBIEE; an overview and capabilities of Oracle Topology; under the covers with GeoCoding. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:4209274028390246::NO

    Read the article

  • Who writes the words? A rant with graphs.

    - by Roger Hart
    If you read my rant, you'll know that I'm getting a bit of a bee in my bonnet about user interface text. But rather than just yelling about the way the world should be (short version: no UI text would suck), it seemed prudent to actually gather some data. Rachel Potts has made an excellent first foray, by conducting a series of interviews across organizations about how they write user interface text. You can read Rachel's write-up here. She presents the facts as she found them, and doesn't editorialise. The result is insightful, but impartial isn't really my style. So here's a rant with graphs.
    My method, and how it sucked: I sent out a short survey. Survey design is one of my hobby-horses, and since some smartarse in the comments will mention it if I don't, I'll step up and confess: I did not design this one well. It was potentially ambiguous, implicitly excluded people, and since I only really advertised it on Twitter and a couple of mailing lists the sample will be chock full of biases. Regardless, these were the questions:
    1. What do you do? Select the option that best describes your role.
    2. What kind of software does your organization make? (optional)
    3. In your organization, who writes the text on your software user interfaces? (for example: button names, static text, tooltips, and so on) Tick all that apply.
    4. In your organization, who is responsible for user interface text? Who "owns" it?
    The most glaring issue (apart from question 3 being a bit broken) was that I didn't make it clear that I was asking about applications. Desktop, mobile, or web, I wouldn't have minded. In fact, it might have been interesting to categorize and compare. But a few respondents commented on the seeming lack of relevance, since they didn't really make software. There were some other issues too. It wasn't the best survey. So, you know, pinch of salt time with what follows. Despite this, there were 100 or so respondents. This post covers the overview, and you can look at the raw data in this spreadsheet.
    What did people do? Boring graph number one: I wasn't expecting that. Given I pimped the survey on Twitter and a couple of Tech Comms discussion lists, I was more banking on an even Content Strategy/Tech Comms split. What the "Others" specified: three people chipped in with Technical Writer. Author, apparently, doesn't cut it. There's a "nobody reads the instructions" joke in there somewhere, I'm sure. There were a couple of hybrid roles, including Tech Comms and Testing, which sounds gruelling and thankless. There was also an Intranet Manager, a Creative Director, a Consultant, a CTO, an Information Architect, and a Translator. That's a pretty healthy slice through the industry.
    Who wrote UI text? Boring graph number two: Annoyingly, I made this a "tick all that apply" question, so I can't make crude and inflammatory generalizations about percentages. This is more about who gets involved in user interface wording. So don't panic about the number of developers writing UI text. First off, it just means they're involved. Second, they might be good at it. What? It could happen. Ours are involved - they write a placeholder and flag it to me for changes. Sometimes I don't make any. It's also not surprising that there's so much UX in the mix. Some of that will be people taking care, and crafting an understandable interface. Some of it will be whatever text goes on the wireframe making it into production.
    I'm going to assume that's what happened at eBay, when their iPhone app purportedly shipped with the placeholder text "Some crappy content goes here". Ahem. Listing all 17 "other" responses would make this post lengthy indeed, but you can read them in the raw data spreadsheet. The award for the approach that sounds the most like a good idea yet carries the highest risk of ending badly goes to whoever offered up "External agencies using focus groups". If you're reading this, and that actually works, leave a comment. I'm fascinated.
    Who owned UI text? Stop. Bar chart time: Wow. Let's cut to the chase, and by "chase", I mean those inflammatory generalizations I was talking about: in around 60% of cases the person responsible for user interface text probably lacks the relevant expertise. Even in the categories I count as being likely to have relevant skills (Marketing Copywriters, Content Strategists, Technical Authors, and User Experience Designers) there's a case for each role being unsuited, as you'll see in Rachel's blog post. So it's not as simple as my headline. Does that mean that you personally, Mr Developer reading this, write bad button names? Of course not. I know nothing about you. It rather implies that as a category, the majority of people looking after UI text have neither communication nor user experience as their primary skill set, and as such will probably only be good at this by happy accident. I don't have a way of measuring the frequency of those accidents.
    What the Others specified:
    * I don't know who owns it. I assume the project manager is responsible.
    * "copywriters" when they wish to annoy me.
    * the client's web maintenance person, often PR or MarComm
    That last one chills me to the bone. Still, at least nobody said "the work experience kid". You can see the rest in the spreadsheet. My overwhelming impression here is of user interface text as an unloved afterthought. There were fewer "nobody" responses than I expected, and a much broader split. But the relative predominance of developers owning and writing UI text suggests to me that organizations don't see it as something worth dedicating attention to. If true, that's bothersome. Because the words on the screen, particularly the names of things, are fundamental to the ability to understand and use software. It's also fascinating that Technical Authors and Content Strategists are neck and neck. For such a nascent discipline, Content Strategy appears to have made a mark on software development. Or my sample is skewed. But it feels like a bit of validation for my rant: Content Strategy is eating Tech Comms' lunch. That's not a bad thing. Well, not if the UI text is getting done well. And that's the caveat to this whole post. I couldn't care less who writes UI text, provided they consider the user and don't suck at it. I care that it may be falling by default to people poorly disposed to doing it right. And I care about that because so much user interface text sucks.
    The most interesting question was one I forgot to ask. It's this: does your organization have technical authors/writers? Like a lot of survey data, that doesn't tell you much on its own. But once we get a bit dimensional, it becomes more interesting. So taken with the other questions, this would have let me find out what I really want to know: what proportion of organizations have Tech Comms professionals but don't use them for UI text? Who writes UI text in their place? Why does this happen?
It's possible (feasible is another matter) that hundreds of companies have tech authors who don't work on user interfaces because they've empirically discovered that someone else, say the Marketing Copywriter, is better at it. And once we've all finished laughing, I'll point out that I've met plenty of tech authors who just aren't used to thinking about users at the point of need in the way UI text and embedded user assistance require. If you've got what I regard, perhaps unfairly, as the bad kind of tech author - the old-school kind with the thousand-page pdf and the grammar obsession - if you've got one of those then you probably are better off getting the UX folk or the copywriters to do your UI text. At the very least, they'll derive terminology from user research.

    Read the article

  • What's the best way to do user profile/folder redirect/home directory archiving?

    - by tpederson
    My company is in dire need of a redesign around how we handle user account administration. I've been tasked with automating the process. The end goal is to have the whole works triggered by the business, and IT only looking in when there's an error reported. The interim phase is going to be semi-manual. That is a level 2 tech inputs the user's info and supervises the process. The current hurdle I'm facing is user profile archiving. Our security team requires us to archive the profile directories for any terminated user for 60 days in case the legal team requires access to their files. Our AD is as much a mess as everything else, so there are some users with home directories and some with profiles. Anyone who has a profile dir in AD also has a good deal of their profile redirected to our file servers over DFS. In order to complete the process manually you find the user in AD, disable them, find their home/profile dir, go there and take ownership, create an archive folder, move all their files over, then delete the old dir. Some users have many many gigs of nonsense and this can take quite some time. Even automated the process would not be a quick one. I'm thinking that I need to have a client side C# GUI for the quick stuff and some server side batch script or console app to offload this long running process. I have a batch script that works decently using takeown and robocopy, but I wonder if a C# console app would do a better job. So, my question at long last is, what do you think is the best way to handle this? I can't imagine this is a unique problem, how do other admins get this done? The last place I worked was easily 10x larger than the place I'm in now. If we would have been doing this manual crap there, they'd have needed a team of at least 30 full time workers to keep up. I have decent skills in C#.net and batch scripting, but am a quick study and I have used most every language once or twice. Thank you for reading this and I look forward to seeing what imaginative solutions you all can come up with.

    Read the article

  • ODBC continually prompts for password

    - by doublej92
    I have an application built in Access 2003 that uses a system DSN ODBC to connect to a SQL Server. The ODBC uses SQL authentication. When the application is started, the user is prompted to authenticate into the database. I have another computer set up within the same domain that has Access 2007 installed on it. I log in using the same credentials that I use to get on the machine that has Access 2003. I converted my application to Access 2007 format and everything works fine. However, when other users try to use the application, they are prompted to enter the database password every time a table is accessed. Thinking it was a problem with my ODBC, I confirmed that the connections were set up the same way on both of my machines, and the user's machine. Here is the interesting part, when the user logged into my machine, it started prompting for the password every time. When I logged into the user's machine, the application worked fine. Anyone have any ideas? All help is appreciated!

    Read the article

  • Django queryset not returning the same values as the generated sql

    - by HRCerqueira
    Hello guys, I have the following queryset:
      subscribers = User.objects.values('email', 'username').filter(
          Q(subscription_settings__new_question='i') |
          Q(subscription_settings__new_question_watched_tags='i',
            marked_tags__id__in=question.tags.values('id'),
            tag_selections__reason='good')
      ).exclude(id=question.author.id)
    The problem is that when I evaluate the query I get only the values that are filtered by the first Q object (even if I reverse the order of the objects). So let's say that I was expecting the users A, B, C and D, where A and B are filtered by the first Q object and C and D by the second. But the queryset only returns A and B. I used the django debug toolbar to see the query that was actually being executed (and then I used a direct print statement like "print subscribers.query.as_sql()" just to be sure) and then evaluated the query directly using psql (I'm using postgres by the way), and I get the results I expect. Here's the query btw:
      SELECT "auth_user"."email", "auth_user"."username" FROM "auth_user"
      LEFT OUTER JOIN "forum_markedtag" ON ("auth_user"."id" = "forum_markedtag"."user_id")
      INNER JOIN "forum_defaultsubscriptionsetting" ON ("auth_user"."id" = "forum_defaultsubscriptionsetting"."user_id")
      WHERE ((("forum_markedtag"."reason" = E'good' AND "forum_defaultsubscriptionsetting"."new_question_watched_tags" = E'i' AND "forum_markedtag"."tag_id" IN (SELECT U0."id" FROM "tag" U0 INNER JOIN "question_tags" U1 ON (U0."id" = U1."tag_id") WHERE U1."question_id" = 64 )) OR "forum_defaultsubscriptionsetting"."new_question" = E'i' ) AND NOT ("auth_user"."id" = 10 ))
    Thanks in advance.
    EDIT: I tried to break the queryset into two, one that uses the first Q object as the filter and another one with the second Q object, and although the second queryset produces the correct SQL that returns the correct values when evaluated directly, it still doesn't return anything when evaluated as a Django queryset. Here's the alternative code:
      subscribers = User.objects.values('email', 'username').filter(
          subscription_settings__new_question='i').exclude(id=question.author.id)

      subscribers2 = User.objects.values('email', 'username').filter(
          subscription_settings__new_question_watched_tags='i',
          marked_tags__id__in=question.tags.values('id'),
          tag_selections__reason='good').exclude(id=question.author.id)

    Read the article

  • SQL Express under IIS 7.5

    - by fampinheiro
    I´m developing a web service that access a SQL Express database, it works very well in the Visual Studio host but when i deploy it to IIS 7.5 i get this exception. Please help me. Stack Trace: System.Data.EntityException: The underlying provider failed on Open. ---> System.Data.SqlClient.SqlException: Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed. at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open() at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) --- End of inner exception stack trace --- at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) at System.Data.EntityClient.EntityConnection.Open() at System.Data.Objects.ObjectContext.EnsureConnection() at System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption) at System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator() at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source) 
at WSCinema.CinemaService.Movie() in D:\Documents\My Dropbox\Projects\sd.v0910\trab3\code\WSCinema\CinemaService.asmx.cs:line 46

    Read the article

  • using indexer to retrieve Linq to SQL object from datastore

    - by fearofawhackplanet
      class UserDatastore : IUserDatastore
      {
          ...
          public IUser this[Guid userId]
          {
              get
              {
                  User user = (from u in _dataContext.Users
                               where u.Id == userId
                               select u).FirstOrDefault();
                  return user;
              }
          }
          ...
      }
    One of the developers in our team is arguing that an indexer in the above situation is not appropriate and that a GetUser(Guid id) method should be preferred. The arguments being that:
    1) We aren't indexing into an in-memory collection; the indexer is basically performing a hidden SQL query.
    2) Using a Guid in an indexer is bad (FxCop flagged this also).
    3) Returning null from an indexer isn't normal behaviour.
    4) An API user generally wouldn't expect any of this behaviour.
    I agree to an extent with (most of) these points. But I'm also inclined to argue that one of the characteristics of Linq is to abstract the database access to make it appear that you're simply working with a bunch of collections, even though the lazy evaluation paradigm means those collections aren't evaluated until you run a query over them. It doesn't seem inconsistent to me to access the datastore in the same manner as if it was a concrete in-memory collection here. Also, bearing in mind this is an inherited codebase which uses this pattern extensively and consistently, is it worth the refactoring? I accept that it might have been better to use a Get method from the start, but I'm not yet convinced that it's completely incorrect to be using an indexer. I'd be interested to hear all opinions, thanks.

    Read the article

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users for my system: normal human users with a username / password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information that is stored for each is quite different, and is managed by different subsystems. They do however interact with the same tables / data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data. My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inheriting off that table (either in the single table, or as two separate tables with a 1-to-1 PK with AuditableIdentity). All operations would use the common AuditableIdentity PK for CreatedBy, ModifiedBy etc columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want to be able to easily determine whether it was a human or the system that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that? Is there a best practice for this scenario? Is this an appropriate way of approaching the problem, or would you not bother with the common table and just rely on joins (to the two separate unrelated user / token tables) later to determine which user type matches which audit records?
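    A rough T-SQL sketch of the supertype/subtype option described above; all table and column names are illustrative, and the audited tables would reference the shared key.

      CREATE TABLE dbo.AuditableIdentity (
          AuditableIdentityId int IDENTITY(1,1) PRIMARY KEY,
          IdentityType        char(1) NOT NULL
              CHECK (IdentityType IN ('H', 'T'))   -- 'H' = human user, 'T' = token/OAuth account
      );

      CREATE TABLE dbo.HumanUser (
          AuditableIdentityId int PRIMARY KEY
              REFERENCES dbo.AuditableIdentity (AuditableIdentityId),
          Username     nvarchar(100) NOT NULL,
          PasswordHash varbinary(64) NOT NULL
      );

      CREATE TABLE dbo.TokenAccount (
          AuditableIdentityId int PRIMARY KEY
              REFERENCES dbo.AuditableIdentity (AuditableIdentityId),
          TokenIdentifier nvarchar(200) NOT NULL
      );

      -- Audited tables then carry, for example:
      --   CreatedBy  int NOT NULL REFERENCES dbo.AuditableIdentity (AuditableIdentityId),
      --   ModifiedBy int NULL     REFERENCES dbo.AuditableIdentity (AuditableIdentityId)

    The IdentityType discriminator makes the "human or system" question a single join, which is the main attraction of the common table.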

    Read the article

  • Sql query - selecting top 5 rows and further selecting rows only if User is present

    - by Gublooo
    Hello, I'm kind of stuck on how to implement this query; it's pretty similar to the query I posted earlier, but I'm not able to crack it. I have a shopping table where every time a user buys anything, a record is inserted. Some of the fields are:
    * shopping_id (primary key)
    * store_id
    * user_id
    Now what I need is to pull only the list of those stores where the user is among the top 5 visitors. When I break it down, this is what I want to accomplish:
    * Find all stores this UserA has visited.
    * For each of these stores, see who the top 5 visitors are.
    * Select the store only if UserA is among the top 5 visitors.
    The corresponding queries would be:
      select store_id from shopping where user_id = xxx

      select user_id, count(*) as 'visits' from shopping
      where store_id in (select store_id from shopping where user_id = xxx)
      group by user_id
      order by visits desc
      limit 5
    Now I need to check in this result set if UserA is present, and select that store only if he's present. For example, if he has visited a store 5 times, but there are 5 or more people who have visited that store more than 5 times, then that store should not be selected. So I'm kind of lost here. Thanks for your help.
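    One hedged way to express this in a single statement, assuming a database that supports window functions (SQL Server 2005+, PostgreSQL, or MySQL 8.0+; the LIMIT syntax above suggests MySQL, where older versions would need the subquery approach instead). Column names follow the question, the user id is a placeholder, and the choice of RANK() versus ROW_NUMBER() decides how ties among equally frequent visitors are treated.

      -- Stores where user 123 (placeholder for UserA) ranks in the top 5 by visit count
      WITH store_visits AS (
          SELECT store_id,
                 user_id,
                 RANK() OVER (PARTITION BY store_id ORDER BY COUNT(*) DESC) AS visit_rank
          FROM shopping
          GROUP BY store_id, user_id
      )
      SELECT store_id
      FROM store_visits
      WHERE user_id = 123
        AND visit_rank <= 5;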

    Read the article

  • SQL Server CE rollback does not undo delete.

    - by INTPnerd
    I am using SQL Server CE 3.5 and C# with the .NET Compact Framework 3.5. In my code I am inserting a row, then starting a transaction, then deleting that row from a table, and then doing a rollback on that transaction. But this does not undo the deletion. Why not? Here is my code:
      SqlCeConnection conn = ConnectionSingleton.Instance;
      conn.Open();
      UsersTable table = new UsersTable();
      table.DeleteAll();
      MessageBox.Show("user count in beginning after delete: " + table.CountAll());
      table.Insert( new User(){Id = 0, IsManager = true, Pwd = "1234", Username = "Me"});
      MessageBox.Show("user count after insert: " + table.CountAll());
      SqlCeTransaction transaction = conn.BeginTransaction();
      table.DeleteAll();
      transaction.Rollback();
      transaction.Dispose();
      MessageBox.Show("user count after rollback delete all: " + table.CountAll());
    The messages indicate that everything works as expected until the very end where the table has a count of 0 indicating the rollback did not undo the deletion.

    Read the article

  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with optimizing the structure of the database. I'll try to explain it exactly. I'm creating a project where we can add different values, but these values must have different column types in the database (e.g. int, double, varchar). What is the best way to store the different types of values in the database? In the project I'm using Propel 1.6. The point is to be able to add a value with 'int', 'varchar' and other column types, so that searching the table is efficient. In total, I have two ideas. The first is to create a "value" table with columns "id", "value_int", "value_double", "value_varchar", etc., with the corresponding column types. Depending on the type of the value, records will be saved with the value in the appropriate column (the rest will be NULL). The second solution is to create separate tables such as "value_int", "value_varchar" etc. Each would have columns "id" and "value", where "value" has the relevant type (int, varchar, etc). I must admit that I don't really believe in either of the above solutions; originally I was thinking about one "value" table where the column would be of type "text", but this solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance.
    EDIT: For example, we have three tables:
    USER (table of users):
    * id
    * name
    FIELD (table of profile fields, where the column 'type' is the type of the field, e.g. int or varchar):
    * id
    * type
    * name
    VALUE:
    * id
    * User_id (FK user.id)
    * Field_id (FK field.id)
    * value
    So each row in the USER table is a user, and the profile is stored in the VALUE table. But each profile field may have a different type (column 'type' in the FIELD table), and based on that I want the value to be added to the appropriate column of the appropriate type.
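    A sketch of the first option (one row per value, one typed column per supported type). The DDL below uses SQL Server syntax purely for illustration, since the actual project (Propel 1.6) most likely targets MySQL, and all names mirror the example tables above.

      CREATE TABLE value (
          id            int IDENTITY(1,1) PRIMARY KEY,
          User_id       int NOT NULL,      -- FK to user.id
          Field_id      int NOT NULL,      -- FK to field.id
          value_int     int NULL,
          value_double  float NULL,
          value_varchar nvarchar(255) NULL,
          -- at most one of the typed columns may be populated per row
          CHECK (
              (CASE WHEN value_int     IS NOT NULL THEN 1 ELSE 0 END) +
              (CASE WHEN value_double  IS NOT NULL THEN 1 ELSE 0 END) +
              (CASE WHEN value_varchar IS NOT NULL THEN 1 ELSE 0 END) <= 1
          )
      );

    Queries then read whichever column matches FIELD.type; the second option trades that per-row CASE logic for one extra join per type.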

    Read the article

  • Employee Info Starter Kit - Visual Studio 2010 and .NET 4.0 Version (4.0.0) Available

    - by joycsharp
    Employee Info Starter Kit is an ASP.NET-based web application with very simple user requirements: we can create, read, update and delete (CRUD) the employee info of a company. Based on just a database table, it explores and solves all major problems in the web development architectural space. This open source starter kit extensively uses major features available in the latest Visual Studio, ASP.NET and Sql Server to make robust, scalable, secured and maintainable web applications quickly and easily. Since its first release, this starter kit has achieved huge popularity in the web developer community and has 140,000+ downloads from the project web site. Visual Studio 2010 and .NET 4.0 came up with lots of exciting features to make software developers' lives easier. A new version (v4.0.0) of Employee Info Starter Kit is now available in both MSDN Code Gallery and CodePlex. Check out the latest version of this starter kit to enjoy the cool features available in Visual Studio 2010 and .NET 4.0. [ Release Notes ]
    Architectural Overview:
    * Simple 2-layer architecture (user interface and data access layer) with 1 optional cache layer
    * ASP.NET Web Form based user interface
    * Custom Entity Data Container implemented (with primitive C# types for data fields)
    * Active Record Design Pattern based Data Access Layer, implemented in C# and Entity Framework 4.0
    * Sql Server Stored Procedure to perform actual CRUD operation
    * Standard infrastructure (architecture, helper utility) for automated integration (bottom-up manner) and unit testing
    Technology Utilized:
    Programming Languages/Scripts: Browser side: JavaScript; Web server side: C# 4.0; Database server side: T-SQL
    .NET Framework Components: .NET 4.0 Entity Framework; .NET 4.0 Optional/Named Parameters; .NET 4.0 Tuple; .NET 3.0+ Extension Method; .NET 3.0+ Lambda Expressions; .NET 3.0+ Anonymous Type; .NET 3.0+ Query Expressions; .NET 3.0+ Automatically Implemented Properties; .NET 3.0+ LINQ; .NET 2.0+ Partial Classes; .NET 2.0+ Generic Type; .NET 2.0+ Nullable Type
    ASP.NET Features: ASP.NET 3.5+ List View (TBD); ASP.NET 3.5+ Data Pager (TBD); ASP.NET 2.0+ Grid View; ASP.NET 2.0+ Form View; ASP.NET 2.0+ Skin; ASP.NET 2.0+ Theme; ASP.NET 2.0+ Master Page; ASP.NET 2.0+ Object Data Source; ASP.NET 1.0+ Role Based Security
    Visual Studio Features: Visual Studio 2010 CodedUI Test; Visual Studio 2010 Layer Diagram; Visual Studio 2010 Sequence Diagram; Visual Studio 2010 Directed Graph; Visual Studio 2005+ Database Unit Test; Visual Studio 2005+ Unit Test; Visual Studio 2005+ Web Test; Visual Studio 2005+ Load Test
    Sql Server Features: Sql Server 2005 Stored Procedure; Sql Server 2005 Xml type; Sql Server 2005 Paging support

    Read the article

  • Uninstalling Reporting Server 2008 on Windows Server 2008

    - by Piotr Rodak
    Ha. I had quite disputable pleasure of installing and reinstalling and reinstalling and reinstalling – I think about 5 times before it worked – Reporting Server 2008 on Windows Server with the same year number in name. During my struggle I came across an error which seems to be not quite unfamiliar to some more unfortunate developers and admins who happen to uninstall SSRS 2008 from the server. I had the SSRS 2008 installed as named instance, SQL2008. I wanted to uninstall the server and install it to default instance. And this is when it bit me – not the first time and not the last that day . The setup complained that it couldn’t access a DLL: Error message: TITLE: Microsoft SQL Server 2008 Setup ------------------------------ The following error has occurred: Access to the path 'C:\Windows\SysWOW64\perf-ReportServer$SQL2008-rsctr.dll' is denied. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22&EvtType=0x60797DC7%25400x84E8D3C0 ------------------------------ BUTTONS: OK This is a screenshot that shows the above error: This issue seems to have a bit of literature dedicated to it and even seemingly a KB article http://support.microsoft.com/kb/956173 and a similar Connect item: http://connect.microsoft.com/SQLServer/feedback/details/363653/error-messages-when-upgrading-from-sql-2008-rc0-to-rtm The article describes issue as following: When you try to uninstall Microsoft SQL Server 2008 Reporting Services from the server, you may receive the following error message: An error has occurred: Access to the path 'Drive_Letter:\WINDOWS\system32\perf-ReportServer-rsctr.dll' is denied. Note Drive_Letter refers to the disc drive into which the SQL Server installation media is inserted. In my case, the Note was not true; the error pointed to a dll that was located in Windows folder on C:\, not where the installation media were. Despite this difference I tried to identify any processes that might be keeping lock on the dll. I downloaded Sysinternals process explorer and ran it to find any processes I could stop. Unfortunately, there was no such process. I tried to rerun the installation, but it failed at the same step. Eventually I decided to remove the dll before the setup was executed. I changed name of the dll to be able to restore it in case of some issues. Interestingly, Windows let me do it, which means that indeed, it was not locked by any process. I ran the setup and this time it uninstalled the instance without any problems:   To summarize my experience I should say – be very careful, don’t leave any leftovers after uninstallation – remove/rename any folders that are left after setup has finished. For some reason, setup doesn’t remove folders and certain files. Installation on Windows Server 2008 requires more attention than on Windows 2003 because of the changed security model, some actions can be executed only by administrator in elevated execution mode. In general, you have to get used to UAC and a bit different experience than with Windows Server 2003. Technorati Tags: SQL Server 2008,Windows Server 2008,SRS,Reporting Services

    Read the article

  • An Open Letter from Lyle Ekdahl, Group Vice President and General Manager, Oracle's JD Edwards

    - by Brian Dayton
    From Lyle Ekdahl, Group Vice President and General Manager, Oracle's JD Edwards As you may have heard, we recently announced some changes to the way Oracle will offer licensing of technology products with JD Edwards EnterpriseOne. Specifically, we have withdrawn from new sales the product known as JD Edwards EnterpriseOne Technology Foundation ("Blue Stack"). Our motivation for this change is simply to streamline licensing for our customers. Going forward, customers will license Oracle products from Oracle and IBM products from IBM. Customers who are currently licensed for Technology Foundation will continue to receive support--unchanged--through September 30, 2016. This announcement affects how customers license these IBM products; it does not affect Oracle's certification roadmap for IBM products with JD Edwards EnterpriseOne. Customers who are currently running their JD Edwards EnterpriseOne infrastructure using IBM platform components can continue to do so regardless of whether they license these components via Technology Foundation or directly from IBM. New customers choosing to run JD Edwards EnterpriseOne on IBM technology should license JD Edwards EnterpriseOne Core Tools from Oracle while licensing Infrastructure and any licenses of IBM products from IBM. For more information about this announcement, customers should refer to My Oracle Support article 1232453.1 Questions included in the "Frequently Asked Questions" document on My Oracle Support: Is Oracle dropping support for IBM DB2 and IBM WebSphere with JD Edwards EnterpriseOne? No. This announcement affects how customers license these IBM products; it does not affect Oracle's certification roadmap for these products. The JD Edwards EnterpriseOne matrix of supported databases, web servers, and portals remains unchanged, including planned support for IBM DB2, IBM WebSphere Application Server, and IBM WebSphere Portal. Customers who are currently running their JD Edwards EnterpriseOne infrastructure using IBM platform components can continue to do so regardless of whether they license these components via Technology Foundation or directly from IBM. As always, the timing and versions of such third-party certifications remain at Oracle's discretion. Does this announcement mean that Oracle is withdrawing support for JD Edwards EnterpriseOne on the IBM i platform? Absolutely not. JD Edwards EnterpriseOne support on the IBM i platform remains unchanged. This announcement simply states that customers will acquire Oracle products from Oracle and IBM products from IBM. In fact, as evidenced by the recent "IBM i Solution Edition for JD Edwards" offering, IBM and the JD Edwards product teams continue to innovate and offer attractive, cost-competitive solutions to the ERP marketplace. For more information about this offering see: http://www-03.ibm.com/systems/i/advantages/oracle/. I hope this clarifies any concerns. Let me know if you have any additional questions or concerns. -Lyle

    Read the article

  • Manchester UG Presentation Video

    In July I was invited to speak at the UK SQL Server UG event in Manchester. I spoke about Excel being a good data mining client. I was a little rushed at the end, as Chris Testa-ONeill told me I had only 5 minutes to go when I had only been talking for 10 minutes. Apparently I have a reputation for running over my time allocation. At the event we also had a product demo from SQL Sentry around their BI monitoring dashboard solution; this includes SSIS, but the main thrust was SSAS. Then came Chris with a look at Analysis Services. If you have never heard Chris talk then take the opportunity now; he is a top-class presenter and I am often found sat at the back of his classes. Here is the video link

    Read the article

  • Change the User Interface Language in Ubuntu

    - by Matthew Guay
    Would you like to use your Ubuntu computer in another language?  Here’s how you can easily change your interface language in Ubuntu. Ubuntu’s default install only includes a couple languages, but it makes it easy to find and add a new interface language to your computer.  To get started, open the System menu, select Administration, and then click Language Support. Ubuntu may ask if you want to update or add components to your current default language when you first open the dialog.  Click Install to go ahead and install the additional components, or you can click Remind Me Later to wait as these will be installed automatically when you add a new language. Now we’re ready to find and add an interface language to Ubuntu.  Click Install / Remove Languages to add the language you want. Find the language you want in the list, and click the check box to install it.  Ubuntu will show you all the components it will install for the language; this often includes spellchecking files for OpenOffice as well.  Once you’ve made your selection, click Apply Changes to install your new language.  Make sure you’re connected to the internet, as Ubuntu will have to download the additional components you’ve selected. Enter your system password when prompted, and then Ubuntu will download the needed languages files and install them.   Back in the main Language & Text dialog, we’re now ready to set our new language as default.  Find your new language in the list, and then click and drag it to the top of the list. Notice that Thai is the first language listed, and English is the second.  This will make Thai the default language for menus and windows in this account.  The tooltip reminds us that this setting does not effect system settings like currency or date formats. To change these, select the Text Tab and pick your new language from the drop-down menu.  You can preview the changes in the bottom Example box. The changes we just made will only affect this user account; the login screen and startup will not be affected.  If you wish to change the language in the startup and login screens also, click Apply System-Wide in both dialogs.  Other user accounts will still retain their original language settings; if you wish to change them, you must do it from those accounts. Once you have your new language settings all set, you’ll need to log out of your account and log back in to see your new interface language.  When you re-login, Ubuntu may ask you if you want to update your user folders’ names to your new language.  For example, here Ubuntu is asking if we want to change our folders to their Thai equivalents.  If you wish to do so, click Update or its equivalents in your language. Now your interface will be almost completely translated into your new language.  As you can see here, applications with generic names are translated to Thai but ones with specific names like Shutter keep their original name. Even the help dialogs are translated, which makes it easy for users around to world to get started with Ubuntu.  Once again, you may notice some things that are still in English, but almost everything is translated. Adding a new interface language doesn’t add the new language to your keyboard, so you’ll still need to set that up.  Check out our article on adding languages to your keyboard to get this setup. 
    If you wish to revert to your original language or switch to another new language, simply repeat the above steps, this time dragging your original or new language to the top instead of the one you chose previously.
    Conclusion: Ubuntu has a large number of supported interface languages to make it user-friendly to people around the globe. And since you can set the language for each user account, it's easy for multi-lingual individuals to share the same computer. Or, if you're using Windows, check out our article on how you can Change the User Interface Language in Vista or Windows 7, too!

    Read the article

  • Frederick .NET User Group April 2010 Meeting

    - by John Blumenauer
    FredNUG is pleased to announce that we have an excellent speaker lined up for April. On April 20th, we'll start with pizza and social networking at 6:30 PM. Then, starting at 7 PM, Dane Morgridge will present "Getting Started with Entity Framework 4". The scheduled agenda is:
    * 6:30 PM - 7:00 PM - Pizza/Social Networking/Announcements
    * 7:00 PM - 8:30 PM - Main Topic: Getting Started with Entity Framework 4 with Dane Morgridge
    * 8:30 PM - 8:45 PM - RAFFLE!
    Main Topic Description: Getting Started with Entity Framework 4. With .NET 3.5 Microsoft released LINQ to SQL, and with .NET 3.5 SP1 came the Entity Framework, both powerful ORM tools leveraging LINQ technology. Entity Framework v1, while usable, was definitely lacking some important features, and the Entity Framework team delivered with version 4 coming with Visual Studio 2010. In this session we will look at Entity Framework 4 from the ground level and you will get a solid understanding of its basic principles. We will also go through all of the new features in Entity Framework 4 and see how far it's come since the initial release. If you've never taken a look at Entity Framework, now is the time as version 4 is the real deal.
    Speaker Bio: Dane Morgridge has been a developer for 9+ years and has worked with .NET & C# since the first public beta. His current passions are Entity Framework, WPF, WCF, Silverlight and LINQ. He works mostly with C#, but is also a big fan of whatever new technology he happens to come across. In addition to software development, he is the host of the Community Megaphone Podcast and also enjoys dabbling in graphic design, video special effects and hockey. When not with his family he is usually learning some new technology or working on some side projects. He is currently working as the Development Manager & Architect at Roska Direct in Montgomeryville, PA. He can be reached through his blog http://geekswithblogs.net/danemorgridge or on Twitter @danemorgridge.
    Please join us and get involved in our .NET developers community!

    Read the article

  • Hekaton – SQL Server’s in-memory database engine

    - by Christian
    Microsoft have just gone public at the PASS Summit in Seattle about a new SQL Server engine that they’re working on which is optimized for high-memory servers – an in-memory OLTP database engine which is built-in to SQL Server rather than a separate entity.  This means that you can move just the performance critical parts of your database to Hekaton. The new engine really pushes the performance boundaries by eliminating as many instructions as possible: Main memory optimized tables which are decoupled from on-disk structures; Everything is lock and latch free; More work is pushed to compile time so your T-SQL code is compiled natively into low-level code. We’re already working with a customer on an early adoption program so expect to hear from us on what we learn about implementing it!   Christian Bolton - MCA, MCM, MVP Technical Director http://coeo.com - SQL Server Consulting & Managed Services
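    For a concrete flavour of what "memory optimized" and "natively compiled" end up looking like, here is a minimal sketch using the DDL that later shipped in SQL Server 2014; the exact syntax was not public at the time of this announcement, and the table and procedure names are purely illustrative.

      -- Memory-optimized table: no locks or latches, hash index sized up front
      CREATE TABLE dbo.SessionState (
          SessionId   int NOT NULL
              PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
          LastUpdated datetime2 NOT NULL,
          Payload     varbinary(4000) NULL
      ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
      GO

      -- Natively compiled procedure: the T-SQL body is compiled to machine code at create time
      CREATE PROCEDURE dbo.TouchSession @SessionId int
      WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
      AS
      BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
          UPDATE dbo.SessionState
          SET LastUpdated = SYSUTCDATETIME()
          WHERE SessionId = @SessionId;
      END;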

    Read the article

  • Data Quality and Master Data Management Resources

    - by Dejan Sarka
    Many companies or organizations do regular data cleansing. When you cleanse the data, the data quality goes up to some higher level. The data quality level is determined by the amount of work invested in the cleansing. As time passes, the data quality deteriorates, and you need to repeat the cleansing process. If you spend an equal amount of effort as you did with the previous cleansing, you can expect the same level of data quality as you had after the previous cleansing. And then the data quality deteriorates over time again, and the cleansing process starts over and over again. The idea of Data Quality Services is to mitigate the cleansing process. While the amount of time you need to spend on cleansing decreases, you will achieve higher and higher levels of data quality. While cleansing, you learn what types of errors to expect, discover error patterns, find domains of correct values, etc. You don’t throw away this knowledge. You store it and use it to find and correct the same issues automatically during your next cleansing process. The following figure shows this graphically. The idea of master data management, which you can perform with Master Data Services (MDS), is to prevent data quality from deteriorating. Once you reach a particular quality level, the MDS application—together with the defined policies, people, and master data management processes—allow you to maintain this level permanently. This idea is shown in the following picture. OK, now you know what DQS and MDS are about. You can imagine the importance on maintaining the data quality. Here are some resources that help you preparing and executing the data quality (DQ) and master data management (MDM) activities. Books Dejan Sarka and Davide Mauri: Data Quality and Master Data Management with Microsoft SQL Server 2008 R2 – a general introduction to MDM, MDS, and data profiling. Matching explained in depth. Dejan Sarka, Matija Lah and Grega Jerkic: MCTS Self-Paced Training Kit (Exam 70-463): Building Data Warehouses with Microsoft SQL Server 2012 – I wrote quite a few chapters about DQ and MDM, and introduced also SQL Server 2012 DQS. Thomas Redman: Data Quality: The Field Guide – you should start with this book. Thomas Redman is the father of DQ and MDM. Tyler Graham: Microsoft SQL Server 2012 Master Data Services – MDS in depth from a product team mate. Arkady Maydanchik: Data Quality Assessment – data profiling in depth. Tamraparni Dasu, Theodore Johnson: Exploratory Data Mining and Data Cleaning – advanced data profiling with data mining. Forthcoming presentations I am presenting a DQS and MDM seminar at PASS SQL Rally Amsterdam 2013: Wednesday, November 6th, 2013: Enterprise Information Management with SQL Server 2012 – a good kick start to your first DQ and / or MDM project. Courses Data Quality and Master Data Management with SQL Server 2012 – I wrote a 2-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar way, you can contact your closes SolidQ subsidiary, or, of course, me directly on addresses [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well. Start improving the quality of your data now!

    Read the article

  • Making Money from your SQL Server Blog

    - by Bill Graziano
    My SQL Server blog reading list is around one hundred blogs.  Many people are writing great content and generating lots of page views.  I see some of them running Google AdSense and trying to make a little money off their traffic.  If you want to earn some extra money from what you've written there are a couple of options, and one new option that I'm announcing here.
    Background
    Internet advertising is sold based on a few different pricing schemes.
    - Flat fee. You offer either all your impressions (page views) or some percentage of your impressions in exchange for a flat monthly fee.
    - CPM, or cost per thousand impressions. If the quoted price is $2 CPM you'll get $2 for every 1,000 times the ad is displayed. While you might think the "M" means millions, the "M" in CPM is the Roman numeral for 1,000.
    - CPC, or cost per click. This is also called PPC or pay per click. In this method you get paid based on how many clicks there are on the ad.
    - CPA, or cost per action. In this method you get paid based on an action that occurs on the advertiser's site after they click on the ad. This is typically some type of sign-up form. This is how most affiliate programs work.
    Darren Rowse at ProBlogger has been writing about blogging and making money off blogs for years.  He has a good introduction to making money on your blog in his "Making Money" section.  If you're interested in learning more he has a post up titled How to Make More Money From Your Blog in the New Year that links to many of his best posts on the subject.
    Google AdSense
    This is the most common method for people earning money from their blogging.  It's easy to set up and administer.  You tell AdSense what size ads you'd like to run and it gives you a little piece of JavaScript to put on your site.  AdSense quickly learns the topics you write about and displays ads that are appropriate for your site.  I typically see ads for hosting, SQL Server tools and developer tools running in AdSense slots.  AdSense pays on a CPC model.  If you translate that back to CPM pricing you'll see rates from $0.50 to $1.00 CPM.
    Amazon
    While you might not make much money writing books it's now possible to make even less helping Amazon sell them.  You can sign up for an Amazon affiliate program.  Each time you link to Amazon and someone buys the book through that link, you get a cut of the sale.  This is the CPA model from above.  Amazon can help you build some pretty nice "stores".  Here's the SQL Server bookstore I built for SQLTeam.com.  If you're just putting in a page with books like I've done on SQLTeam you should keep your expectations low.  If you're writing book reviews or suggesting books on your blog it really does make sense to set up an Amazon affiliate link.  People are much more likely to buy a book based on a review from a trusted source.  I always try to buy through a referral link if there is one.
    Amazon pays about 4% of the price as a referral fee.  You also get credit for anything else they buy while on the site.  I recently had someone buy an iPod nano along with their SQL Server book, making me an extra $5.60!  Estimating how much you can make is difficult though.  How much attention you draw to the links and book reviews can dramatically affect the earnings.
    Private Ad Sales
    This is the hardest but potentially most lucrative option.  You sell advertising directly to companies that want to sell things to your readers.  Typically this would be SQL Server tool vendors, hosting companies or anyone else that wants to make money off database administrators.  This is also the most difficult to do.  You'll need the contacts at the companies and enough page views to make it worth their while.  You'll also need software to track the page views and clicks, geo-target your ads and smooth out the impressions.  Your earnings are based on whatever you can negotiate with the companies.
    SQL Server Ad Network
    For the last couple of years I've run any extra ads that I sold on the SQLTeam Weblogs.  You can see an example of that on Mladen's blog.  The ad in the upper right corner is one that I'm running for him.  (Note: Many of the ads I'm running are geo-targeted to only appear in English-speaking countries.  You may see a different set of ads outside the US, Canada and the UK.  You can also see he has a couple of Google ads on his blog.)  When I run ads on his blog I split the advertising revenue with him.  He makes a little and I make a little.
    I recently started to expand this and sell advertising specifically to run on SQL Server-related blogs.  I'm also starting to run ads on non-SQLTeam blogs.  The only way I can sell more advertising is to have more blogs to run it on.  And that's where you come in.
    I've created a SQL Server advertising network.  I handle all the ad sales and provide the technology to serve the ads.  I handle collections and payments back to you.  You get paid at the end of each month regardless of when (or if) the advertiser actually pays.  All you need to do is add a small piece of JavaScript to your site to display the ads.
    If you're writing about SQL Server and are interested in earning a little money for your site I'd like to talk to you.  You can use the Contact Us page on SQLTeam.com to reach me.  Running advertising on your blog isn't for everyone.  If you're concerned about what advertisers might think about certain posts then you might not be a good fit.  For the most part this isn't an issue.  You'll also need a PayPal account to receive payments.  You probably won't get rich doing this.  But you can earn extra cash on the side for doing what you would do anyway.  I do know that people have earned enough to buy themselves a nice laptop doing this.
    My initial target is blogs with more than 10,000 page views per month.  I expect to pay two to three times what Google pays.  If you have fewer than 10,000 page views per month but are still interested I'd still like to hear from you.  I may not be able to sign up smaller blogs right away but we'll get the process started.  If you're unsure about your traffic, Google Analytics is a free tool that provides great reporting on traffic, popular posts and how people find your blog.  If you have any questions or are just curious, drop me a line and I'll try to answer your questions.

    Read the article

  • User Lockout & WLST

    - by Bala Kothandaraman
    WebLogic Server provides an option to lock out users to protect accounts against password-guessing attacks. It is implemented with a realm-wide Lockout Manager. This feature can be used with a custom authentication provider as well, and if you implement your own authentication provider and wish to implement your own lockout manager, that is possible too. If your domain is configured to use the user lockout manager, the following WLST script will help you to:
    - check whether a user is locked
    - find out the number of locked users in the realm

    #The lockout count below is wrapped in java.lang.Integer; an explicit import keeps the script self-contained
    from java.lang import Integer

    #Define constants
    url = 't3://localhost:7001'
    username = 'weblogic'
    password = 'weblogic'
    checkuser = 'test-deployer'

    #Connect
    connect(username, password, url)

    #Get the Lockout Manager Runtime
    serverRuntime()
    dr = cmo.getServerSecurityRuntime().getDefaultRealmRuntime()
    ulmr = dr.getUserLockoutManagerRuntime()

    print '-------------------------------------------'

    #Check whether the user is locked
    if (ulmr.isLockedOut(checkuser) == 0):
        islocked = 'NOT locked'
    else:
        islocked = 'locked'
    print 'User ' + checkuser + ' is ' + islocked

    #Print the number of locked users
    print 'No. of locked users - ', Integer(ulmr.getUserLockoutTotalCount())
    print '-------------------------------------------'
    print ''

    #Disconnect & Exit
    disconnect()
    exit()

    Read the article

  • PASS Summit book launch and meet the authors - Professional SQL Server 2012 Internals & Troubleshooting

    - by Christian
    I'm very pleased to announce that we'll be officially launching our new book, Professional SQL Server 2012 Internals and Troubleshooting, at the PASS Summit in Seattle tomorrow. In partnership with our great friends at SQL Sentry we'll have most of the authors at the SQL Sentry exhibitor stand from 12:30 on Thursday 8th November for a book signing event, which will give you a rare opportunity to meet the authors and contributors, many of whom have flown in from around the world. SQL Sentry also have lots and lots of copies to give away for free, so be sure to drop by their stand and ask about it! If you really can't wait, or don't want to run the risk of not getting a copy, the PASS bookstore has a few copies for sale but don't expect them to be there for long! You can also order it from your favourite online retailer:
    amazon.com: http://amzn.to/U9IlPV
    barnesandnoble.com: http://bitly.com/Ux1gog
    amazon.co.uk: http://bitly.com/WBJ18l
    I'll be writing a follow-up post very soon explaining why I think you should buy this book, so look out for it!   Christian Bolton - MCA, MCM, MVP Technical Director http://coeo.com - SQL Server Consulting & Managed Services

    Read the article
