Monthly Archives

Articles indexed in June 2012

Page 205/464 | < Previous Page | 201 202 203 204 205 206 207 208 209 210 211 212  | Next Page >

  • TechEd 2012: Office, SharePoint And More Office

    - by Tim Murphy
    I haven’t spent any time looking at Office 365 up to this point.  I met Donovan Follette on the flight down from Chicago, and I also got to spend some time discussing the product offerings with him at the TechExpo; that sealed my decision to attend this session. The main actor of his presentation is the BCS – Business Connectivity Services. He explained that while this feature has existed in on-site SharePoint, it is a valuable new addition to Office 365 SharePoint Online.  If you aren’t familiar with the BCS, it allows you to leverage non-SharePoint enterprise data sources from SharePoint.  The greatest beneficiaries are the end users, who can leverage the data using a variety of Office products and more.  The one thing I haven’t shaken is my skepticism of the use of SharePoint Designer, which Donovan used to create a WCF service.  It is mostly my tendency to try to create solutions that can be managed through the whole application life cycle.  In the past, migrating anything created with SharePoint Designer through test environments has been near impossible. There is a lot of end user power here.  The biggest consideration I think you need to examine when reaching from your enterprise LOB data stores out to an online service and back is that you are going to take a performance hit.  This means that you have to be very aware of how you configure these integrated self-serve solutions.  As a rule, make sure you are using the right tool for the right situation. I appreciated that he showed both no-code and code solutions for the consumer of the LOB data.  I came out of this session much better informed about the possibilities around this product. del.icio.us Tags: Office 365,SharePoint Online,TechEd,TechEd 2012

    Read the article

  • TechEd 2012: Day 3 – Build Me A Solution

    - by Tim Murphy
    While digesting my lunch it was time to digest some TFS Build information. While much of my time is spent wearing my developer’s hat, I am still a jack of all trades, and automated builds are an important aspect of any project.  Because of this I was looking forward to finding out what new features are available in the latest release of Team Foundation Server. The first feature that caught my attention is the TFS Admin Client.  After being used to dealing with NAnt in the past, it is nice to see a build configuration GUI that is so flexible and well thought out.  The bonus is that the tools incorporated in Visual Studio 2012 are just as feature rich.  Life is good. Since automated builds are the hub of your development process in a continuous integration shop, I was really interested in the process-related options. The biggest value add that I noticed was merge gated check-ins.  Merge or batch gated check-ins are an interesting concept: if the build breaks with all the changes, then TFS will run separate builds for each of the check-ins.  This ability to identify the actual offending check-in can save a lot of time and gray hair. The safari of TFS Build that was this session was packed with attractions.  How do you set up builds, what are the different flavors of builds, how does the system report how the build went?  I would suggest anyone who is responsible for build automation spend some serious time with TFS 2012 and VS2012. del.icio.us Tags: Team Foundation Server 2012,TFS,Build,TechEd,TechEd 2012,Visual Studio 2012

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
    I work with stored procedures in SQL Server pretty frequently and have often found myself with a need to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate, the size and contents of which are not known at design time. In the past I’ve taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma, and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable, which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well, but the VARCHAR variable has always felt like an un-wanted “middle man” in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list of values is usually going to contain only a few dozen values. Fortunately SQL Server 2008 introduced the concept of table-valued parameters, which effectively eliminates the need for the clumsy middle man VARCHAR parameter.

    Example: Customer Transaction Summary Report

    Let’s say we have a report that can summarize the transactions that we’ve conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it’s run for all customers, and sometimes it’s run for a handful of customers (i.e. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex, it’s been packed into a stored procedure that accepts three parameters: @startDate – the beginning of the date range for which the report should be run; @endDate – the end of the date range for which the report should be run; @customerIds – the customer Ids for which the report should be run. Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter.

    Defining And Using The Table Type

    In order to use a table-valued parameter, we first need to tell SQL Server what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called ‘IntegerListTableType’ like this:

        CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL)

    Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure.
    The parameter list for the stored procedure definition might look like:

        CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary
            @startDate datetime,
            @endDate datetime,
            @customerIds IntegerListTableType READONLY

    Note the ‘READONLY’ keyword following the declaration of the @customerIds parameter. SQL Server requires that any table-valued parameter be marked as ‘READONLY’, and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it’s used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this:

        DECLARE @customerIdList IntegerListTableType
        INSERT @customerIdList VALUES (1)
        INSERT @customerIdList VALUES (2)
        INSERT @customerIdList VALUES (3)

        EXEC dbo.rpt_CustomerTransactionSummary
            @startDate = '2012-05-01',
            @endDate = '2012-06-01',
            @customerIds = @customerIdList

    Note that we can simply declare a variable of type ‘IntegerListTableType’ just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with an INSERT … SELECT statement if desired.

    Using The Table-Valued Parameter With ADO .NET

    Invoking a stored procedure with a table-valued parameter from ADO .NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here’s some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure:

        var customerIdsParameter = new SqlParameter();
        customerIdsParameter.ParameterName = "@customerIds";
        customerIdsParameter.Direction = ParameterDirection.Input;
        // Table-valued parameters are sent as SqlDbType.Structured.
        customerIdsParameter.SqlDbType = SqlDbType.Structured;
        customerIdsParameter.TypeName = "IntegerListTableType";
        customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");

    All we’re doing here is new’ing up an instance of SqlParameter, setting the parameter’s direction, specifying the name of the User Defined Type that this parameter uses, and setting its value. We’re assuming here that we have an IEnumerable<int> variable called ‘selectedCustomerIds’ containing all of the customer Ids for which the report should be run. The ‘ToIntegerListDataTable’ method is an extension method of the IEnumerable<int> type that looks like this:

        public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName)
        {
            var integerListDataTable = new DataTable();
            // Type the column as int so the values reach SQL Server as integers.
            integerListDataTable.Columns.Add(columnName, typeof(int));
            foreach (var intValue in intValues)
            {
                var nextRow = integerListDataTable.NewRow();
                nextRow[columnName] = intValue;
                integerListDataTable.Rows.Add(nextRow);
            }

            return integerListDataTable;
        }

    Since the ‘IntegerListTableType’ has a single int column called ‘Value’, we pass that in for the ‘columnName’ parameter to the extension method. The method creates a new single-columned DataTable using the provided column name, then iterates over the items in the IEnumerable<int> instance, adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter.

    Advanced Functionality

    Passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I’ve found that it covers the majority of situations where I’ve needed to pass a collection of data for use in a query at run-time.
    I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note here that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A User Defined Type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters (a minimal sketch of the idea follows at the end of this excerpt).

    What About Reporting Services?

    Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post, which I’ll be putting together and posting sometime in the near future.
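    As a rough illustration of that SqlDataRecord/SqlMetaData approach (my sketch, not Erland’s code; the extension method name is hypothetical): the parameter’s Value can be an IEnumerable<SqlDataRecord> instead of a DataTable, so rows are streamed to the server as they are produced.

        using System.Collections.Generic;
        using System.Data;
        using Microsoft.SqlServer.Server; // SqlDataRecord and SqlMetaData

        public static class IntegerListRecordExtensions
        {
            // Yields one SqlDataRecord per value. The records are produced lazily,
            // so no intermediate DataTable is ever materialized in memory.
            public static IEnumerable<SqlDataRecord> ToIntegerListRecords(this IEnumerable<int> intValues)
            {
                var metaData = new SqlMetaData("Value", SqlDbType.Int);
                foreach (var intValue in intValues)
                {
                    var record = new SqlDataRecord(metaData);
                    record.SetInt32(0, intValue);
                    yield return record;
                }
            }
        }

    The SqlParameter would be configured exactly as before, with SqlDbType.Structured, except that its Value is set to selectedCustomerIds.ToIntegerListRecords(). One caveat: ADO .NET throws if the enumerable yields no rows, so the empty-list case needs special handling.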

    Read the article

  • TechEd 2012 - day 3

    - by Stefan Barrett
    The content has got more useful for me as a developer, and I've now seen 2 things which I think will make a big difference: Fakes in VS2012 - allows me to stub or fake out libraries, making unit testing easier/possible. C++ AMP & auto - auto might get me to start using C++ again (it makes code like for-each loops much nicer/easier to write), while AMP is something I want to play with (it moves the processing onto the GPU). The food got a little better, while there was less sign of the snacks.

    Read the article

  • TFS Rant *WARNING* negative opinions are being expressed.

    - by ryanabr
    It has happened several times now where I end up installing TFS "over the shoulder" of the system admin guy whose job it will be to "own" the server when I am gone. TFS is challenging enough to stand up when doing it myself on a completely open platform, but at these locations networks are locked down, machines are locked down, and the unexpected always seems to pop up. I personally have the tolerance for these things as a software developer, but as we are installing I have to listen to all of the 'colorful' remarks being made: "why is it like this" or "this is a piece of crap". Generally the issues center around SharePoint integration. TFS on its own is straightforward, but the last flavor in everyone's mouth is the SharePoint piece. As a product I like SharePoint, but installation is a nightmare. In this particular case, we are going to use WSS since the customer would like this separate from their corporate SharePoint 2010 installations; their dev team is really small (1 developer) and TFS is being used as a VSS replacement more than a full-blown ALM tool. The server where it is being installed has a Cisco Security Agent on it that seems to block 'suspicious' activity, and as far as I can tell is preventing WSS from installing properly. The most confounding thing is that we can find no meaningful log entries to help diagnose the issue. It didn't help matters that when we tried to contact Microsoft for support, because we mentioned TFS in the list of things that we were trying to install, we got a TFS support person after waiting 2 hours, NOT the SharePoint person that we really needed; after another 2 hours, the SharePoint support that we did get managed to corrupt the registry sufficiently with his 'tools' that we ended up starting over from scratch the next day anyway, after going home at midnight. My point to this is: the System Administrator who is going to own this now thinks it is a piece of crap because SharePoint wouldn't install properly. Perception is everything.  Everyone today is conditioned that software installs and works in a very simple manner. When looking at the different options to install TFS with the different "modes", there is inconsistency in the information being presented, which leads to choices that cause headaches and this bad perception before the product is even installed. I am highlighting this because I love TFS as a product, but I HATE installing it, and would like it to install as simply and elegantly as the product operates once it is installed.

    Read the article

  • TechEd 2012: Day 3 – Morning TFS

    - by Tim Murphy
    My morning sessions for day three were dominated by Team Foundation Server.  This has been a hot topic for our clients lately, so this topic really struck a chord. The speaker for the first session was from Boeing.  It was nice to hear how a company mixes both agile and waterfall project management.   The approaches that he presented were very pragmatic.  For their needs, reporting is the crucial part of their decision to use TFS.  This was interesting since this is probably the last aspect that most shops would think about. The challenge of getting users to adopt TFS was brought up by the audience.  As with the other discussion points he took a very level-headed stance.  The approach he was prescribing was to eat the elephant a bite at a time instead of all at once.  If you try to convert your entire shop at once the culture shock will most likely kill the effort. Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS.  If you don’t implement a tool and processes around it that comply with the standards bodies that govern your business, you are in for a world of hurt. Ultimately the reason they chose TFS was because it was the first tool that incorporated all the ALM features that they needed.  It also reduced licensing costs compared to all the different tools they would otherwise need to buy to complete the same tasks.  They got to this point by doing an industry evaluation.  Although TFS came out on top, he said that it still has a big gap in the Java area.  Of course in this market there are vendors helping to close that gap. The second session was on how continuous feedback in agile is a new focus in VS2012.  The problems they intended to address included cycle time, average time to repair, and root cause analysis. The speakers fired features at us as if they were firing a machine gun.  I will just say that I am looking forward to digging into the product after seeing this presentation.  Beyond that I will simply list some of the key features that caught my attention: the ability to link documents into tasks as artifacts, the web access portal, PowerPoint storyboards, exploratory testing, and request feedback (which allows users to record notes, screen shots, and video/audio). See you after the second half. del.icio.us Tags: TechEd,TechEd 2012,TFS,Team Foundation Server

    Read the article

  • Why I’m *NOT* Attending MOSOConf

    - by D'Arcy Lussier
    It’s my daughter’s birthday on Saturday, she turns 10! Lots of party planning to do this week! What, you thought this was going to be a rant post? Hope everyone has a fantastic time at MOSOConf, and congrats to the organizers for putting on another fantastic event! Btw, if you’re looking to get even more mobile and web development goodness, come check out Prairie IT Pro & Dev Con in Regina this October – we just announced our current session/speaker line-up and we’ll be adding more sessions over the summer! Click the image below to visit the event site!

    Read the article

  • The Agile Engineering Rules of Test Code

    - by Malcolm Anderson
    Lots of test code gets written; a lot of it is waste, and some of it is well engineered waste. Companies hire Agile Engineering Coaches because agile engineering is easy to do wrong. Very easy. So here's a quick tool you can use for self coaching. It's what I call "The Agile Engineering Rules of Test Code", and it's going to act as a sort of table of contents for some future posts. The Agile Engineering Rules of Test Code: Test code is not throw away code. Test code is production code. 8 questions to determine the quality of your test code: Does the test code have appropriate comments? Is the test code executed as part of the build? Every time? Is the test code getting refactored? Does everyone use the same test code? Can the test code be described as “well maintained”? Can a bright six year old tell you why any particular test failed? Are the tests independent and infinitely repeatable?

    Read the article

  • Backpacks and Booth Paint: TechEd 2012

    - by The Un-T Guy
    Arriving in the parking lot of the Orange County Convention Center, I immediately knew I was in the right place. As far as the eye could see, the acres of asphalt were awash in backpacks, quirky (to be kind) outfits, and bad haircuts. This was the place. This was Microsoft Mecca v2012 for geeks and nerds, the Central Florida event of the year, a gathering of high tech professionals whose skills I both greatly respect and, frankly, fear a little. I was wholly and completely out of my element, a dork in a vast sea of geek jumbo. It was like wearing Dockers and a golf shirt walking into a RenFaire, but one with really crappy costumes and no turkey legs...save those attached to some of the attendees. Of course the corporate whores...errrr, vendors were in place, ready to parlay the convention's fre-nerd-ic energy into millions of dollars by convincing the big-brained and under-sexed in the crowd (i.e., virtually all of them...present company excluded, of course) that their product or service was the only thing standing between them and professional success, industry fame, and clear skin. "With KramTech 2012," they seemed to scream, "you will be THE ROCK STAR of your company's IT department!" As car shows and tattoo parlors learned long ago, Tech companies seem to believe that the best way to attract the attention of this crowd is through the hint of the promise of sex. They recruit and deploy an army of "sales reps" whose primary qualifications appear to be long hair, short skirts, high heels, and a vagina. Unlike their distant cousins in the car and body art industries, however, this sub-species of booth paint (semi-gloss decoration that adds nothing to the substance of the product) seems torn between committing to being all-out sex objects and recognition that they are in the presence of intelligent, discerning people. People who are smart enough to know exactly what these vendors are doing. Also unlike their distant car show and tattoo shop cousins, these young women (what…are there no gay tech professionals who could use some eye candy?) seem to realize that while IT remains a male-dominated field, there are ever-increasing numbers of intelligent, capable, strong professional women – women who’ve battled to make it in this field through hard work and work performance rather than a hard body and performing after work. This is not to say that all of the young female sales reps are there only because of their physical attributes. Many are competent, intelligent, and driven -- not to mention attractive. They're working hard on the front lines of delivering the next generation of technology. The distinction is pretty clear, however, between these young professionals and the booth paint. The former enthusiastically deliver credible information about the products they’re hawking. The latter are positioned in the aisles, uncomfortably avoiding eye contact as they struggle to operate the badge readers. Surprisingly, not all of the women in attendance seemed to object to the objectification of their younger sisters. One IT professional woman who came of age in the industry (mostly in IT marketing) said, “I have no problem with it. I was a ‘booth babe’ for years and it doesn’t bother me at all.” Others, however, weren’t quite so gracious. One woman I spoke with, an IT manager from Cheyenne, Wyoming, said it was demeaning and frankly, as more and more women grow into IT management positions, not a great marketing idea.
    “Using these young women is, to me, no different than vendors giving out t-shirts to attract attention. It’s sad because it’s still hard for a woman to be respected in the IT field and this just perpetuates the outdated notion that IT is a male-dominated field.” She went on to say that decisions by vendors to employ these young women in this “inappropriate way” could impact her purchasing decisions. “I might be swayed toward a vendor who has women on staff who are intelligent and dynamic rather than the vendors who use the ‘decoration’ girls.” So in many ways, the IT industry is no different than most other industries as it struggles to maximize performance by finding and developing talent – all of the talent, not just the 50% with a penis. Women in IT, like their brethren, struggle to find their niche in the field, to grow professionally, and reach for the brass ring, struggling to overcome obstacles as they climb the mountain of professional success in a never-ending cycle of economic uncertainty. But as (generally) well-educated and highly-trained professionals, they are probably better positioned than those in many other industries. Besides, they’ve got one other advantage over their non-IT counterparts as they attempt their ascent to the summit: They’ve already got the backpacks.

    Read the article

  • O'Reilly deal of the Week on Early Release Books to 19/June/2012 23:39 PT

    - by TATWORTH
    O'Reilly are offering a 50% off deal on early release e-books at http://shop.oreilly.com/category/early-release.do?code=WKEARE. "With Early Release ebooks, you get entire books in their earliest form — the author's raw and unedited content as he or she writes — so you can take advantage of these technologies long before the official release of these titles. You'll also receive updates when significant changes are made, as well as the final multiple-format ebook bundle." These are an excellent deal!

    Read the article

  • Why does Django's dev server use port 8000 by default?

    - by kojiro
    (My question isn't really about Django. It's about alternative http ports. I just happen to know Django is a relatively famous application that uses 8000 by default, so it's illustrative.) I have a dev server in the wild on which we occasionally need to run multiple httpd services on different ports. When I needed to stand a third service up and we were already using ports 80 and 8080, I discovered our security team has locked port 8000 access from the Internet. I recognize that port 80 is the standard http port, and 8080 is commonly http_alt, but I'd like to make the case to our security team to open 8000 as well. In order to make that case, I hope the answer to this question can provide me with a reasonable argument for using port 8000 over 8080 in some case. Or was it just a random choice with no meaning?

    Read the article

  • How to retrieve names of all private MSMQ queues - efficiently?

    - by Damian Powell
    How can I retrieve the names of all of the private MSMQ queues on the local machine, without using System.Messaging.MessageQueue.GetPrivateQueuesByMachine(".")? I'm using PowerShell so any solution using COM, WMI, or .NET is acceptable, although the latter is preferable. Note that this StackOverflow question has a solution that returns all of the queue objects. I don't want the objects (it's too slow and a little flakey when there are lots of queues), I just want their names.
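    One direction sometimes suggested for this is to read the queue names from MSMQ's performance counter instances via WMI, which avoids materializing MessageQueue objects entirely. A rough C# sketch of that idea, assuming the Win32_PerfRawData_MSMQ_MSMQQueue class is present on the machine (i.e. MSMQ's performance counters are installed; PowerShell can query the same class with Get-WmiObject):

        using System;
        using System.Management; // add a reference to System.Management.dll

        class PrivateQueueNames
        {
            static void Main()
            {
                // WMI exposes one instance of this perf-counter class per queue;
                // instance names typically look like "machine\private$\queuename".
                var searcher = new ManagementObjectSearcher(
                    "SELECT Name FROM Win32_PerfRawData_MSMQ_MSMQQueue");
                foreach (ManagementObject queue in searcher.Get())
                {
                    var name = queue["Name"] as string;
                    if (name != null && name.Contains("private$")) // skip computer-wide totals
                    {
                        Console.WriteLine(name);
                    }
                }
            }
        }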

    Read the article

  • Apache and mod_authn_dbd DBDPrepareSQL error

    - by David Brown
    I am running Apache 2.2.17 on SUSE Linux 11. I have installed the mod_authn_dbd module to allow authentication using a MySQL database. Simple digest authentication works absolutely fine using the following directive:

        AuthDBDUserRealmQuery "SELECT loginPassword FROM login WHERE loginUser = %s and loginRealm = %s"

    However, I would like to both generalize this statement and prepare it for performance and security reasons. Thus, I used the following directives instead:

        DBDPrepareSQL "SELECT loginPassword FROM login WHERE loginUser = %s and loginRealm = %s" digestLogin
        AuthDBDUserRealmQuery digestLogin

    These directives generate the following errors:

        [...] [error] (20014)Internal error: DBD: failed to prepare SQL statements:
        [...] [error] (20014)Internal error: DBD: failed to initialise

    Why does my actual SQL statement work when used directly, but not when I try to prepare it? (Note I have tried using '?' and '%%' in place of '%', to no effect.)

    Read the article

  • Tracking changes to firewall configs?

    - by jmreicha
    Myself and one other individual will be taking over some of the daily firewall management duties soon, and I'm looking for a way to track changes on our firewall configurations for auditing purposes; I need some ideas on a good way to track the changes that are made. I don't have a lot of specific criteria, but here are some of the basic things I would like to be able to do: Access to previous revisions of firewall configs. Access to changes made and by whom. When specific changes were made. I'm wondering if some sort of revision control software would work here as a way to track the changes? Or if some other approach would work better for managing the change control in this situation. I'm open to any and all suggestions at this point. EDIT: We are using a Checkpoint pair, one passive one active configuration. I will update again with specific model numbers when I get a chance.

    Read the article

  • Debian + ProFTPD + LDAP Incorrect Password Issue

    - by Tristan Hall
    I have LDAP configured for ProFTPD and I have modified the modules.conf file to include the LDAP module. However, every time I log in with FileZilla I get 530 Login Incorrect. It does this for all users except those whose passwords are defined locally as well as in LDAP. The exact same setup works fine on my CentOS server, and I've already tried re-installing it after purging the configuration files.

    Read the article

  • Office 2007 Installer Not Logging When Run During Startup Script

    - by bshacklett
    I'm using a startup script to deploy Microsoft Office 2007 Standard. It's not working quite right and I'm trying to find logging information for it. Unfortunately, no log is being generated by setup. I've tried configuring the log at %temp%\, C:\, %systemroot%\temp and leaving it at defaults, all to no avail. Nothing is showing up in the event logs, either. Is there anywhere else I can go to look for information on what's going on with the installer?

    Read the article

  • Using standard e-mail address as user system wide name

    - by PeterMmm
    I'm going to re-build a very old Lotus Notes infrastructure, coming from 4.x towards 8.5. I'm trying to set up Domino so that all user names are a single string or the internet e-mail address. For example, the user "John Smith/ACME" should appear throughout the whole system as jsmith or [email protected]. I still get jsmith/ACME all around. Where it is most annoying is in the NAB when creating a new message. Is there a way to get all addresses in a uniform standard e-mail address format, at least in mail? The mixup in the destination, like "John Smith/ACME, [email protected]", confuses the users.

    Read the article

  • SQL Server Reporting Services - website blank, builder works

    - by Keith
    We have a few reports in SQL Server Reporting Services. For some reason when we run the report from the website, it doesn't return any data. When I run the same report from the Report Builder, it returns data. I looked in the logs and the only errors I could find are:

        ReportingServicesService!library!8!6/15/2012-08:12:33:: i INFO: Current DB Version Unknown, Instance Version C.0.8.54.
        ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights., ;Info: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.
        ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Exception caught while starting service. Error: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.

    I'm not really sure why it would be a different version. It's all SQL Server 2008 R2 and I haven't made any changes to it since it's been running.

    Read the article

  • Options to efficiently synchronize 1 million files with remote servers?

    - by Zilvinas
    At a company I work for we have such a thing called "playlists", which are small files of ~100-300 bytes each. There are about a million of them. About 100,000 of them get changed every hour. These playlists need to be uploaded to 10 other remote servers on different continents every hour, and it needs to happen quickly, ideally in under 2 minutes. It's very important that files that are deleted on the master are also deleted on all the replicas. We currently use Linux for our infrastructure. I was thinking about trying rsync with the -W option to copy whole files without comparing contents. I haven't tried it yet, but maybe people who have more experience with rsync could tell me if it's a viable option? What other options are worth considering?

    Read the article

  • Varnish, hide port number

    - by George Reith
    My setup is as follows: OS: CentOS 6.2 running on an OpenVZ virtual machine. Web server: Nginx listening on port 8080. Reverse proxy: Varnish listening on port 80. The problem is that Varnish redirects my requests to port 8080, and this appears in the address bar like so: http://mysite.com:8080/directory/, causing relative links on the site to include the port number (8080) in the request and thus bypassing Varnish. The site is powered by WordPress. How do I allow Varnish to use Nginx as the backend on port 8080 without appending the port number to the address? Edit: Varnish is set up like so. I have told the Varnish daemon to listen to port 80 by default:

        VARNISH_VCL_CONF=/etc/varnish/default.vcl
        #
        # Default address and port to bind to
        # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
        # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
        VARNISH_LISTEN_ADDRESS=
        VARNISH_LISTEN_PORT=80
        #
        # Telnet admin interface listen address and port
        VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
        VARNISH_ADMIN_LISTEN_PORT=6082
        #
        # Shared secret file for admin interface
        VARNISH_SECRET_FILE=/etc/varnish/secret
        #
        # The minimum number of worker threads to start
        VARNISH_MIN_THREADS=1
        #
        # The Maximum number of worker threads to start
        VARNISH_MAX_THREADS=1000
        #
        # Idle timeout for worker threads
        VARNISH_THREAD_TIMEOUT=120
        #
        # Cache file location
        VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
        #
        # Cache file size: in bytes, optionally using k / M / G / T suffix,
        # or in percentage of available disk space using the % suffix.
        VARNISH_STORAGE_SIZE=1G
        #
        # Backend storage specification
        VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
        #
        # Default TTL used when the backend does not specify one
        VARNISH_TTL=120

    The VCL file that Varnish calls (through an include in default.vcl) consists of:

        backend playwithbits {
            .host = "127.0.0.1";
            .port = "8080";
        }

        acl purge {
            "127.0.0.1";
        }

        sub vcl_recv {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                set req.backend = playwithbits;
                set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
                if (req.request == "PURGE") {
                    if (!client.ip ~ purge) {
                        error 405 "Not allowed.";
                    }
                    return(lookup);
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }

        sub vcl_hit {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    set obj.ttl = 0s;
                    error 200 "Purged.";
                }
            }
        }

        sub vcl_miss {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    error 404 "Not in cache.";
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset req.http.cookie;
                }
                if (req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.*|)$") {
                    unset req.http.cookie;
                    set req.url = regsub(req.url, "\?.*$", "");
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }

        sub vcl_fetch {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.url ~ "^/$") {
                    unset beresp.http.set-cookie;
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset beresp.http.set-cookie;
                }
            }
        }

    Read the article

  • How to determine the Kerberos realm from an LDAP directory?

    - by tstm
    I have two Kerberos realms I can authenticate against. One of them I can control, and the other one is external from my point of view. I also have an internal user database in LDAP. Let's say the realms are INTERNAL.COM and EXTERNAL.COM. In LDAP I have user entries like this:

        1054 uid=testuser,ou=People,dc=tml,dc=hut,dc=fi
        shadowFlag: 0
        shadowMin: -1
        loginShell: /bin/bash
        shadowInactive: -1
        displayName: User Test
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        uidNumber: 1059
        shadowWarning: 14
        uid: testuser
        shadowMax: 99999
        gidNumber: 1024
        gecos: User Test
        sn: Test
        homeDirectory: /home/testuser
        mail: [email protected]
        givenName: User
        shadowLastChange: 15504
        shadowExpire: 15522
        cn: User.Test
        userPassword: {SASL}[email protected]

    What I would like to do, somehow, is to specify on a per-user basis which authentication server / realm the user is authenticated against. Configuring Kerberos to handle multiple realms is easy. But how do I configure other instances, like PAM, to handle the fact that some users are from INTERNAL.COM and some from EXTERNAL.COM? There needs to be an LDAP lookup of some kind where the realm and the authentication name are fetched from, and then the actual authentication itself. Is there a standardized way to add this information to LDAP, or look it up? Are there some other workarounds for a multi-realm user base? I might be OK with a single-realm solution, too, as long as I can specify the user name - realm combination for the user separately.

    Read the article

  • Odd IIS v6 .html file caching behavior: Ideas?

    - by samsmith
    Our load balancer looks for "loadbalancer.html". So today I deleted the file, but in a browser I could still fetch it. My eyes crossed; I made sure, 3 ways, that the file was removed from the IIS web site home dir. But fetching it still gave a nice HTTP 200 response, and the file. In desperation, I created a new loadbalancer.html and put some test text in it, and then this new content was retrieved. Then I deleted this file, and THEN it finally gave the 404 as desired. How can I better understand/control this? Thanks!

    Read the article

  • wifi routers and concurrent devices

    - by Joelio
    We have a Linksys WRT54G WiFi router in our office, which was working great when we had 5-6 folks. Now on peak days we have 10-15 people, each with a computer, smartphone, etc., plus an Ooma VOIP device. On average 1-2 times a day I need to go hard-reboot the router, and sometimes the border router (a Cox-supplied device). I assume this is just because the router can't handle this many concurrent users. So my question is: can these consumer routers handle this kind of load? If not, would adding more devices solve the problem, and how close in proximity can I put 2 routers without having interference problems (our office area is not that big physically)?

    Read the article

  • Use procmail to deliver to stdout and a second server

    - by Halfgaar
    I would like a Postfix server to deliver each message to a certain transport as well as relay it to a second server. In master.cf, I have the following transport:

        zarafa unix - n n - 10 pipe
            flags= user=vmail argv=/usr/bin/zarafa-dagent ${user}

    Because I can't get Postfix to deliver to two transports, what I probably need is a wrapper transport, using procmail maybe, that delivers to zarafa-dagent and relays to a second server (not just forward to an address; relay to a second server). It can also be a script that calls sendmail or whatever, but at the moment I don't know how to proceed.

    Read the article

  • Can my employer force me to backup my personal machine? [closed]

    - by Eric B
    Here's the background: Approximately 1.25 years ago, the company I work for was acquired by a larger 400 person company. Before acquisition (and today still) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc). We are approximately 15 employees within the larger organization. Some time after acquisition, the now owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) is requiring them to retrieve & store from us any related information. Because we were a separate company up until acquisition, there is a high probability that our personal machines might contain information about what the lawsuit alleges (email, documents, chat logs?, etc). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply. Since acquisition (1.25 yrs), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc for work. Email is via POP3s and not hanging around on the mail server - it's on everyone's client. Documents are spread across personal machines. So, now they want us each to backup our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents. That single folder will be excluded from backup. Of course, that means total re-arrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work. So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article
