Search Results

Search found 119217 results on 4769 pages for 'microsoft project server'.


  • How to access Ubuntu Server from local PC?

    - by Roland
    Today I installed my first web server, which is Ubuntu 12.04 LTS. I got Apache, PHP and MySQL working, there is even phpMyAdmin! Everything is working fine on that PC, but the problem is that I have no idea how to connect to this server from my PC. Just to clarify: I have one PC that I work on and another one, which has Ubuntu Server running. I even managed to connect them through the router, which I made to work as a switch. I can see the Ubuntu Server on my Windows PC in "Network", but it's empty, I can't see any files. I tried to share a folder etc/www on the server, but it shows an error about rights, saying that I'm not the folder's owner. I guess I'm not doing the right thing at all, am I? Even if I could see the shared folder on my Windows PC, I would still not be able to type "somedomain.com" on the Windows PC and access, for example, index.php or the MySQL database. So, the question is: how do I configure Ubuntu Server to be accessible from the Windows PC?
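    As a rough starting point (none of this is from the original question, and the IP address and hostname below are placeholders), the usual first step is to find the server's address on the Ubuntu box and point the Windows browser or hosts file at it:

        # on the Ubuntu server: show the address Apache is reachable on
        ip addr show
        # or simply
        hostname -I

        # on the Windows PC: browse to http://<server-ip>/ to reach Apache, or map a
        # friendly name by adding a line like this to
        # C:\Windows\System32\drivers\etc\hosts
        192.168.1.50    somedomain.local

    Browsing by name or IP goes through Apache, so no folder sharing is required; seeing files in Windows "Network" would additionally need a Samba share, which is a separate piece from the web server itself.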

    Read the article

  • Directory service unavailable, new hardware same settings

    - by Alex
    I'm working on a project with 2 sites connected by a VPN. Site 1 has the main server and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly, however I can't for the life of me get the replacement server at site 2 up and running. I'm trying to replace like for like, just upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) and used exactly the same settings as the old server. I have Active Directory, DNS Server, DHCP Server and WINS Server set up and configured, using all the same settings as the old server (except IP address and name). I can access Active Directory but I can't do anything; add, edit and delete all return "the directory service is unavailable". No-one can log in on any of the computers on site 2 and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP but not the net). I really don't have much experience but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions of where I can start to troubleshoot this? It's driving me crazy and I only have a day before all the users are back on site.
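    As a hedged starting point (the commands below are standard Windows Server 2003 / Support Tools utilities, not something taken from the original post), the usual first checks on the new domain controller are its own diagnostics, replication status and DNS configuration:

        rem run on the new site-2 server
        dcdiag /v
        repadmin /replsummary
        repadmin /showrepl
        rem confirm the server points at a DNS server that holds the AD zones
        ipconfig /all
        netdiag /test:dns

    "The directory service is unavailable" on a freshly promoted DC very often comes down to DNS or replication over the VPN rather than the new hardware itself.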

    Read the article

  • How to fix Solr - Server is shutting down issue?

    - by Krunal
    I had Solr 4.1 running on Windows Server 2008 R2, deployed on Tomcat. However, today it stopped suddenly, and accessing Solr now gives the following error:
    HTTP Status 503 - Server is shutting down; type: Status report; message: Server is shutting down; description: The requested service is not currently available.
    Looking further into the logs, we found the following.
    Log file tomcat7-stderr.2013-05-09.txt:
    May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663
    Log file catalina.2013-05-09.txt:
    May 09, 2013 7:59:25 PM org.apache.solr.core.SolrResourceLoader <init> INFO: new SolrResourceLoader for directory: 'c:\solrdir\'
    May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log SEVERE: Exception during parsing file: null:org.xml.sax.SAXParseException; systemId: file:/c:/solr/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.apache.solr.core.Config.<init>(Config.java:121) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977) at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655) at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) at java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source)
    May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init SEVERE: Could not start Solr. Check solr/home property and the logs
    May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log SEVERE: null:org.apache.solr.common.SolrException: at org.apache.solr.core.CoreContainer.load(CoreContainer.java:431) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977) at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655) at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) at java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: org.xml.sax.SAXParseException; systemId: file:/c:/solrdir/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.apache.solr.core.Config.<init>(Config.java:121) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428) ... 20 more
    May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init INFO: SolrDispatchFilter.init() done
    May 09, 2013 7:59:29 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\docs
    May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\manager
    May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\ROOT
    May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["http-bio-8983"]
    May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["ajp-bio-8009"]
    May 09, 2013 7:59:30 PM org.apache.catalina.startup.Catalina start INFO: Server startup in 9578 ms
    May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663
    Any idea what could be wrong and how to fix it?
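    The SAXParseException above points at line 2, column 6 of solr.xml ("The processing instruction target matching "[xX][mM][lL]" is not allowed"), which the XML parser raises when something precedes the <?xml ?> declaration - typically a blank line, leading whitespace or a byte-order mark introduced the last time the file was edited. As a hedged illustration only (the core name below is a placeholder, not taken from the question), a legacy-style Solr 4.1 solr.xml has to start with the declaration as the very first characters in the file:

        <?xml version="1.0" encoding="UTF-8" ?>
        <solr persistent="true">
          <cores adminPath="/admin/cores">
            <core name="collection1" instanceDir="collection1" />
          </cores>
        </solr>

    Re-saving solr.xml as UTF-8 without a BOM, removing anything above the declaration, and restarting Tomcat is the usual remedy for this particular parse error.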

    Read the article

  • SQL SERVER – What the Business Says Is Not What the Business Wants

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by Steve Jones. Steve raised a very interesting question; every DBA and Database Developer has already faced this situation. When I read the topic, I felt that I could write several different examples here. Today, I will cover this scenario, which seems quite amusing. Shrinking Database Earlier this year, I was working on a SQL Server performance tuning consultancy when I faced a very interesting situation. No matter how much I attempted to reduce the fragmentation, I always ended up with heavy fragmentation on the server. After careful research, I figured out that one of the jobs was continuously shrinking the database – which is a very bad practice. I have blogged about my experience over here: SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server. I removed the incorrect shrinking process right away; once it was removed, everything continued working as it should. After a couple of days, I learned that one of their DBAs had put back the same DBCC process. I requested the Senior DBA to find out what was going on, and he came up with the following reason: “Business Requirement.” I could not believe it! Now it was time for me to go deep into the subject; moreover, it had become necessary to understand the need. After talking to the people concerned, I understood what they needed. Please read the exact business need in their own language. The Shrinking “Business Need” “We shrink the database because if we take backup after shrinking the database, the size of the same is smaller. Once we take backup, we have to send it to our remote location site. Our business requirement is that we need to always make sure that the file is smallest when we transfer it to remote server.” The backup is not affected in any way whether you shrink the database or not; the size of the backup will be the same. After a couple of tests, they agreed with me. Shrinking will, however, create performance issues, as it introduces heavy fragmentation in the database. The Real Solution The real business need was that they needed the smallest possible backup file. We finally implemented a quick solution which they are still using to date: compressed backup. I have written about this subject in detail a few years ago in SQL SERVER – 2008 – Introduction to New Feature of Backup Compression. Compressed backup not only creates a smaller file but also speeds up the backup itself. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
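    As a minimal illustration of the compressed-backup approach (the database name and path below are placeholders, not from the original article):

        -- Full backup with compression (Enterprise Edition on SQL Server 2008,
        -- also Standard Edition from SQL Server 2008 R2 onwards)
        BACKUP DATABASE [AdventureWorks]
        TO DISK = N'D:\Backup\AdventureWorks.bak'
        WITH COMPRESSION, STATS = 10;

    The compressed file is typically far smaller than a shrink-then-backup result, and it avoids the index fragmentation that repeated DBCC SHRINKDATABASE runs cause.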

    Read the article

  • Configure TFS portal afterwards

    Update #1 January 8th, 2010: There is an updated post on this topic for Beta 2: http://www.ewaldhofman.nl/post/2009/12/10/Configure-TFS-portal-afterwards-Beta-2.aspx Update #2 October 10th, 2010: In the new Team Foundation Server Power Tools September 2010, there is now a command to create a portal. tfpt addprojectportal - Add or move portal for an existing team project. Usage: tfpt addprojectportal /collection:uri /teamproject:"project name" /processtemplate:"template name" [/webapplication:"webappname"] [/relativepath:"pathfromwebapp"] [/validate] [/verbose] /collection Required. URL of Team Project Collection. /teamproject Required. Specifies the name of the team project. /processtemplate Required. Specifies the name of the process template. /webapplication The name of the SharePoint Web Application. Must also specify relativepath. /relativepath The path for the site relative to the root URL for the SharePoint Web Application. Must also specify webapplication. /validate Specifies that the user inputs are to be validated. If specified, only validation will be done and no portal setting will be changed. /verbose Switches on the verbose mode. I created a new Team Project in TFS 2010 Beta 1 and chose not to configure SharePoint during the creation of the Team Project. Of course I found out fairly quickly that a portal for TFS is very useful, especially the Iteration and Product backlog workbooks and the dashboard reports. This blog describes how you can configure the SharePoint portal afterwards. Update, September 9th, 2009: Adding the portal afterwards is much easier than the approach in the original post below. Here are the steps. Step 1: Create a new temporary project (with a SharePoint site for it). Open the Team Explorer. Right click in the Team Explorer the root node (i.e. the project collection). Select "New team project" from the menu. Walk through the wizard and make sure you check the option to create the portal (which is checked by default). Step 2: Disable the site for the new project. Open the Team Explorer. Select the team project you created in step 1. In the menu click on Team -> Show Project Portal. In the menu click on Team -> Team Project Settings -> Portal Settings... The following dialog pops up. Uncheck the option "Enable team project portal". Confirm the dialog with OK. Step 3: Enable the site for the original one and point it to the newly created site. Open the Team Explorer. Select the team project you want to add the portal to. In the menu open Team -> Team Project Settings -> Portal Settings... The same dialog as in step 2 pops up. Check the option "Enable team project portal". Click on the "Configure URL" button. The following dialog pops up. In the dialog, select the TFS server in the web application combobox. Enter in the Relative site path the text "sites/[Project Collection Name]/[Team Project Name created in step 1]". Confirm the "Specify an existing SharePoint Site" dialog with OK. Check the "Reports and dashboards refer to data for this team project" option. Confirm the dialog "Project Portal Settings" with OK. Step 4: Delete the temporary project you created. In Beta 1, I have found no way to delete a team project. Maybe it will be available in TFS 2010 Beta 2.
    Original post. Step 1: Create new portal site. Go to the SharePoint site of your project collection (http://<servername>/sites/<project_collection_name>/default.aspx). Click on the Site Actions at the left side of the screen and choose the option Site Settings. In the site settings, choose the Sites and workspaces option. Create a new site. Enter the values for the Title, the description and the site address, and choose the TFS2010 Agile Dashboard as template. Create the site by clicking on the Create button. Step 2: Integrate portal site with team project. Open Visual Studio. Open the Team Explorer (View -> Team Explorer). Select in the Team Explorer tool window the Team Project for which you are creating a new portal. Open the Project Portal Settings (Team -> Team Project Settings -> Portal Settings...). Check the Enable team project portal checkbox. Click on Configure URL... You will get a new dialog as below. Enter the URL of the TFS server in the web application combobox and specify the relative site path: sites/<project collection>/<site name>. Confirm with OK. Check in the Project Portal Settings dialog the checkbox "Reports and dashboards refer to data for this team project". Confirm the settings with OK (this takes a while...). When you now browse to the portal, you will see that the dashboards are now showing up with the data for the current team project. Step 3: Download process template. To get a copy of the documents that are default in a team project, we need to have a fresh set of files that are not attached to a team project yet. You can do that with the following steps. Start the Process Template Manager (Team -> Team Project Collection Settings -> Process Template Manager...). Choose the Agile process template and click on Download. Choose a folder to download to. Step 4: Add Product and Iteration backlog. Go to the Team Explorer in Visual Studio. Make sure the team project is in the list of team projects, and expand the team project. Right click the Documents node, and choose New Document Library. Enter "Shared Documents", and click on Add. Right click the Shared Documents node and choose Upload Document. Go to the file location where you stored the process template from step 3 and then navigate to the subdirectory "Agile Process Template 5.0\MSF for Agile Software Development v5.0\Windows SharePoint Services\Shared Documents\Project Management". Select in the Open dialog the files "Iteration Backlog" and "Product Backlog", and click Open. Step 5: Bind Iteration backlog workbook to the team project. Right click on the "Iteration Backlog" file and select Edit, and confirm any warning messages. Place your cursor in cell A1 of the Iteration backlog worksheet. Switch to the Team ribbon and click New List. Select your Team Project and click Connect. From the New List dialog, select the Iteration Backlog query in the Workbook Queries folder. The final step is to add a set of document properties that allow the workbook to communicate with the TFS reporting warehouse. Before we create the properties we need to collect some information about your project. The first piece of information comes from the table created in the previous step. As you collect these properties, copy them into Notepad so they can be used in later steps. The properties, and how to retrieve each value:
    [Table name]: Switch to the Design ribbon and select the Table Name value in the Properties portion of the ribbon. [Project GUID]: In the Visual Studio Team Explorer, right click your Team Project and select Properties; select the URL value and copy the GUID (long value with lots of characters) at the end of the URL. [Team Project name]: In the Properties dialog, select the Name field and copy the value. [TFS server name]: In the Properties dialog, select the Server Name field and copy the value. [UPDATE] I have found that this is not correct: you need to specify the instance of your SQL Server. The value is used to create a connection to the TFS cube. Switch back to the Iteration Backlog workbook. Click the Office button and select Prepare – Properties. Click the Document Properties – Server drop down and select Advanced Properties. Switch to the Custom tab and add the following properties (variable name = value) using the values you collected above: [Table name]_ASServerName = [TFS server name]; [Table name]_ASDatabase = tfs_warehouse; [Table name]_TeamProjectName = [Team Project name]; [Table name]_TeamProjectId = [Project GUID]. Click OK to close the properties dialog. It is possible that the Estimated Work (Hours) is showing the #REF! value. To resolve that, change the formula to: =SUMIFS([Table name][Original Estimate]; [Table name][Iteration Path];CurrentIteration&"*";[Table name][Area Path];AreaPath&"*";[Table name][Work Item Type]; "Task") For example =SUMIFS(VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Original Estimate]; VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Iteration Path];CurrentIteration&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Area Path];AreaPath&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Work Item Type]; "Task") Also the Total Remaining Work in the Individual Capacity table may contain #REF! values. To resolve that, change the formula to: =SUMIFS([Table name][Remaining Work]; [Table name][Iteration Path];CurrentIteration&"*";[Table name][Area Path];AreaPath&"*";[Table name][Assigned To];[Team Member];[Table name][Work Item Type]; "Task") For example =SUMIFS(VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Remaining Work]; VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Iteration Path];CurrentIteration&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Area Path];AreaPath&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Assigned To];[Team Member];VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Work Item Type]; "Task") Save and close the workbook. Step 6: Bind Product backlog workbook to the team project. Repeat the steps for binding the Iteration backlog for this workbook too. In the worksheet Capacity, the formula for the Story Points might be missing. You can resolve it with: =IF([Iteration]="";"";SUMIFS([Table name][Story Points];[Table name][Iteration Path];[Iteration]&"*")) Example =IF([Iteration]="";"";SUMIFS(VSTS_487f1e4c_db30_4302_b5e8_bd80195bc2ec[Story Points];VSTS_487f1e4c_db30_4302_b5e8_bd80195bc2ec[Iteration Path];[Iteration]&"*"))

    Read the article

  • A dacpac limitation – Deploy dacpac wizard does not understand SqlCmd variables

    - by jamiet
    Since the release of SQL Server 2012 I have become a big fan of using dacpacs for deploying SQL Server databases (for reasons that I will explain some other day) and I chose to use a dacpac to distribute my recently announced utility sp_ssiscatalog (read: Introducing sp_ssiscatalog (v1.0.0.0)). Unfortunately if you read that blog post you may have taken note of the following: Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard however in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. I think it is worth calling out this limitation separately in this blog post because it's a limitation that all dacpac users need to be aware of. If you try to deploy the dacpac containing sp_ssiscatalog using the wizard in SSMS then this is what you will see: TITLE: Microsoft SQL Server Management Studio ------------------------------ Could not deploy package. (Microsoft.SqlServer.Dac) ------------------------------ ADDITIONAL INFORMATION: Missing values for the following SqlCmd variables:SSISDB. (Microsoft.Data.Tools.Schema.Sql) ------------------------------ BUTTONS: OK ------------------------------ The message is quite correct. The SSDT DB project that I used to build this dacpac *does* have a SqlCmd variable in it called SSISDB. Quite simply, the Dac Deployment wizard in SSMS is not capable of deploying such dacpacs. Your only option for deploying such dacpacs is to use the command-line tool sqlpackage.exe. Generally I use sqlpackage.exe anyway (which is why it has taken me months to encounter the aforementioned problem) and have found it preferable to using a GUI-based wizard. Your mileage may vary. @Jamiet
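    For reference, a hedged sketch of the sqlpackage.exe call (the server, database and file names below are placeholders; the /Variables switch is the part the SSMS wizard has no equivalent for):

        SqlPackage.exe /Action:Publish ^
            /SourceFile:"sp_ssiscatalog.dacpac" ^
            /TargetServerName:"localhost" ^
            /TargetDatabaseName:"master" ^
            /Variables:SSISDB=SSISDB

    Supplying the SqlCmd variable on the command line with /Variables:<name>=<value> is what allows the publish to proceed where the wizard stops.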

    Read the article

  • Tellago 2011: Dwight, Chris and Don are MVPs

    - by gsusx
    It’s been a great start to 2011. Tellago’s Dwight Goins has been awarded as a Microsoft BizTalk Server MVP for 2011. I’ve always said that Dwight should have been an MVP a long time ago. His contributions to the BizTalk Server community are nothing short of remarkable. In addition to Dwight, my colleagues Don Demsak and Chris Love also renewed their respective MVP awards. A few others of us are up for renewal later in the year. As a recognition of Dwight’s award, we have made him the designated doorman...(read more)

    Read the article

  • What would be the optimal disk config for SQL Server 2008 R2?

    - by Kev
    We have a new Dell R710 server that came with the following storage configuration: 8 x 146GB SAS 10k 6Gbps disks, 1 x PERC H700 Integrated Controller (2 x 4 disks - 2 ports each supporting 4 disks). 1. What would be the optimal configuration if we were just after performance? 2. What would be the optimal configuration if we were after performance but wanted data resilience? 3. As per 2 above, but with a hot standby disk? We plan to run Windows 2008 R2 and SQL Server 2008 R2. Maximising storage capacity isn't a prime concern.

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.  Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL below was rewritten into the physical SQL for an Oracle 11g database that follows it.
    Logical SQL:
        SELECT "D0 Time"."T02 Per Name Month" saw_0,
               "D4 Product"."P01  Product" saw_1,
               "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1
    Physical SQL:
        WITH SAWITH0 AS (
            select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,
                   sum(T835.Units) as c3, T879.Prod_Key as c4
            from Product T879 /* A05 Product */ ,
                 Time_Mth T986 /* A08 Time Mth */ ,
                 FactsRev T835 /* A11 Revenue (Billed Time Join) */
            where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
            group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2
    Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.
    The Structure of the Common Enterprise Information Model (CEIM) The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.
    Business Model Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • .net Framework won't install on Server 2003 SP2 x64

    - by Yvan JANSSENS
    Hi, When I install the .net Framework 3.5 SP1 on my rental VPS, I get the message that setup has failed. It's a Server 2003 VPS w/ SP2 installed (64-bit). The .net Framework v 2.0 installed correctly. How do I fix this? This is the installation log:
    [03/10/10,07:44:46] Microsoft .NET Framework 2.0a x64: [2] Failed to fetch setup file in CBaseComponent::PreInstall()
    [03/10/10,07:44:47] setup.exe: [2] ISetupComponent::Pre/Post/Install() failed in ISetupManager::InternalInstallManager() with HRESULT -2147467260.
    [03/10/10,07:44:48] setup.exe: [2] CSetupManager::RunInstallPhase() - Call to Pre/Install/Post for InstallComponents failed
    [03/10/10,07:44:49] setup.exe: [2] CSetupManager::RunInstallPhaseAndCheckResults() - RunInstallPhase() returned a NULL piActionResults
    [03/10/10,07:44:49] setup.exe: [2] CSetupManager::RunInstallFromList() - RunInstallPhaseAndCheckResults failed [2]
    [03/10/10,07:44:51] setup.exe: [2] ISetupManager::RunInstallLists(IP_PREINSTALL failed in ISetupManager::RunInstallFromThread()
    [03/10/10,07:44:52] setup.exe: [2] ISetupManager::RunInstallFromThread() failed in ISetupManager::RunInstall()
    [03/10/10,07:44:53] setup.exe: [2] CSetupManager::Run() - Call to RunInstall() failed
    [03/10/10,07:44:59] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0a x64 is not installed.
    [03/10/10,07:45:00] WapUI: [2] DepCheck indicates XPSEPSC x64 Installer was not attempted to be installed.
    [03/10/10,07:45:02] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.0 SP2 x64 was not attempted to be installed.
    [03/10/10,07:45:02] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.5 (x64) 'package' was not attempted to be installed.
    [03/11/10,14:19:23] Microsoft .NET Framework 3.0 SP2 x64: [2] Error: Installation failed for component Microsoft .NET Framework 3.0 SP2 x64. MSI returned error code 1604
    [03/11/10,14:26:14] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.0 SP2 x64 is not installed.
    Thanks!! Yvan

    Read the article

  • [Zend Framework - Ubuntu 10.04 - LAMP - First Project] I get a 500 error on http://localhost/zftutorial/p

    - by meyosef
    Hi, I'm new to Zend and LAMP. My software: Zend Framework, Ubuntu 10.04, LAMP. I made my first Zend project with Zend tool (following this tutorial: http://akrabat.com/wp-content/uploads/Getting-Started-with-Zend-Framework.pdf). But when I go to http://localhost/zftutorial/public I get a 500 error. My $ dir -l of zftutorial:
        drwxr-xr-x 6 root root 4096 2010-06-01 23:54 application
        drwxr-xr-x 2 root root 4096 2010-06-01 23:54 docs
        drwxr-xr-x 3 root root 4096 2010-06-02 00:23 library
        drwxr-xr-x 3 root root 4096 2010-06-02 00:00 nbproject
        drwxr-xr-x 2 root root 4096 2010-06-01 23:54 public
        drwxr-xr-x 4 root root 4096 2010-06-01 23:54 tests
    My /etc/apache2/sites-available/default:
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
    Thanks,
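    A likely first check (this is not from the original question): Zend Framework's default .htaccess in public/ depends on mod_rewrite, and a 500 error with AllowOverride All already in place is often simply that module not being enabled. Something along these lines, plus the Apache error log, usually pinpoints it:

        sudo a2enmod rewrite
        sudo /etc/init.d/apache2 restart
        tail -n 20 /var/log/apache2/error.log

    The error.log entry for the 500 response normally names the exact cause (a missing module, a syntax error in .htaccess, or a PHP fatal error).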

    Read the article

  • How can I see logs on a server after a kernel panic hang?

    - by Low Kian Seong
    I am running a production Gentoo Linux machine, and recently there was a situation where the server hung at my co-located premises, and when I got there I noticed that the server was hung on what appeared to be a kernel panic. I rebooted the machine with a hard reboot and was disappointed to find that I could not find a shred of evidence anywhere on why the machine hung. Is it true that when I do a hard reboot the messages themselves get lost, or is there a setting somewhere, say in syslog-ng or maybe in sysctl, to at least preserve the error log so that I can prevent such mishaps from happening in the future? I am running a 2.6.x kernel by the way. Thanks in advance.
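    One hedged option (the host address below is a placeholder, and the source name varies between distributions) is to ship kernel messages to another box as they happen, so that a panic which never reaches the local disk still leaves a trace elsewhere. In syslog-ng this is a small addition to syslog-ng.conf:

        # forward kernel messages to a remote log host
        destination d_remote { udp("192.168.1.5" port(514)); };
        filter f_kernel { facility(kern); };
        log { source(src); filter(f_kernel); destination(d_remote); };

    For a true hard panic the kernel may not be able to run syslog at all, so netconsole or a kdump/crash-dump setup is the more complete answer; remote syslog at least captures everything leading up to the hang.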

    Read the article

  • Building in Change: Project Construction in Asset Intensive Industries

    - by Sylvie MacKenzie, PMP
    According to a recent survey by the Economist Intelligence Unit, sponsored by Oracle, only 51% of project owners rated themselves as effective at delivering their projects to scope, budget, and schedule when confronted with change. In addition, only 43% rated themselves as effective at anticipating potential change. Even with the best processes and technology in place, change is often an unavoidable part of the construction process. How organizations respond to change can mean the difference between delays and cost overruns, and projects being completed on schedule and on budget. Implementing Enterprise Project Portfolio Management, and using a solution to help manage and automate those processes, can help asset-intensive organizations:
    Govern project and program compliance and regulatory requirements for project success
    Unite project teams and stakeholders through collaboration and strong feedback methods to speed project completion
    Reduce the risk of cost and schedule overruns and any resulting penalties to deliver on time and on budget
    Effectively manage change throughout the project life cycle
    Ensure sufficient capacity, utilization, and availability of people, skills, and other resources to meet commitments.
    The results of the recent EIU survey, sponsored by Oracle, "Building in Change: Project Construction in Asset-Intensive Industries", will be revealed in an upcoming webinar with Hart Energy / Oil & Gas Investor, featuring the Economist Intelligence Unit and Oracle, on April 11th at 1pm CST. Click here for further information or visit http://www.oilandgasinvestor.com/

    Read the article

  • Microsoft hosting free Hyper-V training for VMware Pros

    - by Ryan Roussel
    Microsoft will be hosting free training for virtualization professionals focused on Hyper-V, System Center, and virtualization architecture. Details are below:
    Just one week after Microsoft Management Summit 2011 (MMS), Microsoft Learning will be hosting an exclusive three-day Jump Start class specially tailored for VMware and Microsoft virtualization technology pros. Registration for “Microsoft Virtualization for VMware Professionals” is open now; the class will be delivered online on March 29-31, 2011 from 10:00am-4:00pm PDT. The course is COMPLETELY FREE and OPEN TO ANYONE! Please share with your customers, blog, Tweet, etc. – help us get the word out to strengthen support for Microsoft’s virtualization offerings.
    What’s the high-level overview? This cutting edge course will feature expert instruction and real-world demonstrations of Hyper-V and brand new releases from System Center Virtual Machine Manager 2012 Beta (many of which will be announced just one week earlier at MMS). Register Now!
    Day 1 will focus on “Platform” (Hyper-V, virtualization architecture, high availability & clustering):
    10:00am – 10:30am PDT: Virtualization 360 Overview
    10:30am – 12:00pm: Microsoft Hyper-V Deployment Options & Architecture
    1:00pm – 2:00pm: Differentiating Microsoft and VMware (terminology, etc.)
    2:00pm – 4:00pm: High Availability & Clustering
    Day 2 will focus on “Management” (System Center Suite, SCVMM 2012 Beta, Opalis, Private Cloud solutions):
    10:00am – 11:00am PDT: System Center Suite Overview w/ focus on DPM
    11:00am – 12:00pm: Virtual Machine Manager 2012 | Part 1
    1:00pm – 1:30pm: Virtual Machine Manager 2012 | Part 2
    1:30pm – 2:30pm: Automation with System Center Opalis & PowerShell
    2:30pm – 4:00pm: Private Cloud Solutions, Architecture & VMM SSP 2.0
    Day 3 will focus on “VDI” (VDI infrastructure/architecture, v-Alliance, application delivery via VDI):
    10:00am – 11:00am PDT: Virtual Desktop Infrastructure (VDI) Architecture | Part 1
    11:00am – 12:00pm: Virtual Desktop Infrastructure (VDI) Architecture | Part 2
    1:00pm – 2:30pm: v-Alliance Solution Overview
    2:30pm – 4:00pm: Application Delivery for VDI
    Every section will be team-taught by two of the most respected authorities on virtualization technologies: Microsoft Technical Evangelist Symon Perriman and leading Hyper-V, VMware, and XEN infrastructure consultant, Corey Hynes.
    Who is the target audience for this training? Suggested prerequisite skills include real-world experience with Windows Server 2008 R2, virtualization and datacenter management. The course is tailored to these types of roles:
    · IT Professional
    · IT Decision Maker
    · Network Administrators & Architects
    · Storage/Infrastructure Administrators & Architects
    How do I register and learn more about this great training opportunity?
    · Register: Visit the Registration Page and sign up for all three sessions
    · Blog: Learn more from the Microsoft Learning Blog
    · Twitter: Here are a few posts you can retweet:
    o Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V @MSLearning #Hyper-V
    o @SysCtrOpalis Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V #Hyper-V
    o Learn all the cool new features in Hyper-V & System Center 2012! SCVMM, Self-Service Portal 2.0, http://bit.ly/JS-Hyper-V #Hyper-V #Opalis
    What is a “Jump Start” course? A “Jump Start” course is “team-taught” by two expert instructors in an engaging radio talk show style format.
The idea is to deliver readiness training on strategic and emerging technologies that drive awareness at scale before Microsoft Learning develops mainstream Microsoft Official Courses (MOC) that map to certifications.  All sessions are professionally recorded and distributed through MS Showcase, Channel 9, Zune Marketplace and iTunes for broader reach.

    Read the article

  • How to Transpose Rows and Columns in Excel 2013

    - by Lori Kaufman
    You’ve set up your worksheet with all your row and column headings and you’ve entered all your data. Then, you discover that it would look better if the rows were the columns and the columns were the rows. How do you accomplish this easily? There is an easy way to convert your rows to columns and vice versa using the Transpose feature in Excel. We’ll show you how. Select the cells containing the headings and data you want to transpose. Click Copy or press Ctrl + C. Click in a blank cell on the spreadsheet. This cell will be the top, left corner of the new table of data. Click the down arrow on the Paste button and select Paste Special from the drop-down menu. On the Paste Special dialog box, select the Transpose check box so there is a check mark in the box and click OK. The rows become the columns and the columns become the rows. The original set of data still exists. You can select those cells and delete the headings and data, if desired. Isn’t that a lot easier and faster than retyping all your data?
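    As a side note not covered in the article itself (the range below is just an example), Excel also has a TRANSPOSE worksheet function that keeps the flipped copy linked to the original data. Select an empty range with the flipped dimensions, type the formula, and confirm it with Ctrl + Shift + Enter so it is entered as an array formula:

        =TRANSPOSE(A1:D10)

    Unlike Paste Special with Transpose, this version updates automatically when the source cells change, though the resulting cells cannot be edited individually.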

    Read the article

  • Prioritize One Network Share Over Another And/Or Cap Network Share Traffic? (Windows Server 2008 R2 Enterprise)

    - by FullTimeCoderPartTimeSysAdmin
    One of my fileservers is a Hyper-V VM running Windows Server 2008 R2 Enterprise. The NIC in the VM maps to a 1 GB NIC on the host that is dedicated just to this VM. I have two shares on the file server. One is very important and used by a few users. The other is less important but used by many users. The issue I'm having: when a ton of users are accessing the unimportant share, it can choke out requests to the important share. What I'd like to do: I'd like to somehow prioritize requests for files on the more important share, or even dedicate a portion of the NIC's bandwidth just to requests for files on that share. Is there any way to do that? Alternatively, can I add another NIC and specify that all traffic to one share goes over one NIC and traffic to the other share goes over the other?

    Read the article

  • How to install Microsoft Exchange 2007 as a member server

    - by O_O
    I am trying to install Microsoft Exchange 2007 on a Windows Server 2003 machine as a member server. I already have a Windows Server 2008 machine as my domain controller. I'm having a hard time figuring out what is needed to prepare the machine for the Exchange 2007 installation. My specific question is: while following the procedures in the TechNet Library, do I still need to go through the section "How to Prepare Active Directory and Domains" and run the following commands if I am making it a member server and NOT a domain controller? i.e. setup /ps, setup /p /on:, setup /PrepareDomain: Thank you.
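    For orientation only (this is not an answer from the thread, and the organization name below is a placeholder), the abbreviated switches above correspond to the Exchange 2007 setup.com Active Directory preparation commands, which look roughly like this when run from the installation media by an account with the required rights:

        setup /PrepareLegacyExchangePermissions
        setup /PrepareSchema                            (short form: setup /ps)
        setup /PrepareAD /OrganizationName:"Contoso"    (short form: setup /p /on:"Contoso")
        setup /PrepareDomain                            (short form: setup /pd)

    These steps prepare the forest and domain ahead of installation and are separate from which role the Exchange box itself plays (member server vs. domain controller).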

    Read the article

  • change default port of IIS and let another process listen on port 80 (Windows Server 2008)

    - by aleroot
    I have an installation of Windows Server 2008 running IIS 6 with a website listening on port 8080. Even though I have moved the website to listen on 8080, port 80 is still kept in use by IIS (in fact, by the kernel process: System, PID 4). I want to let another process listen on port 80 without uninstalling or disabling IIS; I want to keep IIS listening on port 8080 and another service on port 80. Is there a way to do it? I saw a similar thread here on Server Fault, but the solution (using httpcfg.exe delete iplisten -i 0.0.0.0:80) only works on 2003, because on 2008 the httpcfg.exe utility doesn't exist and it seems that it cannot be installed. Does anyone have a solution to get rid of the kernel listening on port 80 on Windows Server 2008 with IIS running?

    Read the article

  • How to Handle Managing a Coding Project With 8 Friends?

    - by Raul
    I usually code by myself, but currently I need to do a Java web-based project with 8 of my friends. I would like to ask the following questions: 1) How do I document the development properly? For example, how do I keep a daily log? Any software or format suggestions? What things do you think are important to include in the log? 2) How do we code together? Is there any software/IDE that allows a team to code together, something like Google Docs? 3) How do I do a proper backup for a team project? Any software or tips to share?

    Read the article

  • Should we be able to deploy a single package to the SSIS Catalog?

    - by jamiet
    My buddy Sutha Thiru sent me an email recently asking my opinion on a particular nuance of the project deployment model in SQL Server Integration Services (SSIS) 2012, and I’d like to share my response as I think it warrants a wider discussion. Sutha asked: Jamie What is your take on this? http://www.mattmasson.com/index.php/2012/07/can-i-deploy-a-single-ssis-package-from-my-project-to-the-ssis-catalog/ Overnight I was talking to Matt who confirmed that they have no plans to change the deployment model. For example, if we have the following scenario, how do we deploy? Sprint 1: Pkg1, 2 & 3 have been developed and deployed to UAT. Once signed off, it's been deployed to Live. Sprint 2: Pkg 4 & 5 have been developed. During this time users raised a bug on Pkg2. We want to make the change to Pkg2 and deploy that to UAT and eventually to LIVE without releasing Pkg 4 & 5. How do we do it? Matt pointed me to his blog entry which I have seen before. http://www.mattmasson.com/index.php/2012/02/thoughts-on-branching-strategies-for-ssis-projects/ Thanks Sutha My response: Personally, even though I've experienced the exact problem you just outlined, I agree with the current approach. I steadfastly believe that there should not be a way for an unscrupulous developer to slide in a new version of a package under the covers. Deploying .ispac files brings a degree of rigour to your operational processes (a minimal T-SQL sketch of project-level deployment to the catalog follows this entry). Yes, that means that we as SSIS developers are going to have to get better at using source control and branching properly, but that is no bad thing in my opinion. Claiming to be proper "developers" is a bit of a cheap claim if we don't even do the fundamentals correctly. I would be interested in the thoughts of others who have used the project deployment model. Do you agree with my point of view? @Jamiet

    Read the article
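    As a hedged, minimal sketch of what project-level deployment to the SSIS catalog looks like, the T-SQL below uses SSISDB's catalog.deploy_project procedure. The folder name, project name and .ispac path are invented for illustration; the point is simply that the procedure takes the compiled project as a whole, so a Sprint 2 release necessarily ships new builds of every package in that project.

```sql
-- Minimal sketch: deploy a whole .ispac (never a single package) to the SSIS catalog.
-- Folder, project and file names below are hypothetical; adjust them to your environment.
USE SSISDB;
GO

DECLARE @project_binary VARBINARY(MAX);
DECLARE @operation_id   BIGINT;

-- Read the compiled project produced by the build.
SELECT @project_binary = BulkColumn
FROM OPENROWSET(BULK N'C:\Builds\MyEtlProject.ispac', SINGLE_BLOB) AS ispac;

-- Create the target folder once, if it does not already exist.
IF NOT EXISTS (SELECT 1 FROM catalog.folders WHERE name = N'Finance')
    EXEC catalog.create_folder @folder_name = N'Finance';

-- Deployment is always at project granularity: this replaces every package in the project.
EXEC catalog.deploy_project
    @folder_name    = N'Finance',
    @project_name   = N'MyEtlProject',
    @project_stream = @project_binary,
    @operation_id   = @operation_id OUTPUT;

SELECT @operation_id AS deployment_operation_id;
```

    Wrapping a script like this into the release process is one way to get the rigour argued for above: whatever went through source control and the build is exactly what lands in the catalog.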

  • SQL SERVER – Storing 64-bit Unsigned Integer Value in Database

    - by Pinal Dave
    Here is a very interesting question I received in an email just the other day. Some questions are just so good that it makes me wonder how come I have not faced the situation first hand. Anyway, here is the question - “Pinal, I am migrating my database from MySQL to SQL Server and I have faced a unique situation. I have been using an unsigned 64-bit integer in MySQL, but when I try to migrate that column to SQL Server I am facing an issue, as there is no datatype which I find appropriate for my column. It is now too late to change the datatype and I need an immediate solution. One chain of thought was to change the data type of the column from unsigned 64-bit (BIGINT) to VARCHAR(n), but that will just change the data type for me such that I will face quite a lot of performance-related issues in the future. In SQL Server we also have the BIGINT data type, but that is a signed 64-bit datatype. The BIGINT datatype in SQL Server has a range of -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807). However, my value is much larger than this. Is there any way I can store my big 64-bit unsigned integer without losing much of the performance by converting it to VARCHAR?” Very interesting question. For the sake of the argument, we could ask the user whether such a big number is really needed, or, if we are talking about an identity column, I really doubt the table will ever grow beyond that range. The real question which I found interesting here was how to store a 64-bit unsigned integer value in SQL Server without converting it to a string data type. After thinking a bit, I found a fairly simple answer: I can use the NUMERIC data type. I can use the NUMERIC(20) datatype for a 64-bit unsigned integer value, NUMERIC(10) for a 32-bit unsigned integer value and NUMERIC(5) for a 16-bit unsigned integer value. The NUMERIC datatype supports a maximum precision of 38. Now here is another thing to keep in mind. The NUMERIC datatype will indeed accept the 64-bit unsigned integer, but if in the future you try to enter a negative value, it will allow that as well. Hence, you will need to put an additional constraint on the column so that it only accepts positive integers (a minimal sketch of this follows the entry). Here is another big concern: SQL Server will store the number as NUMERIC and will treat it as a positive integer for all practical purposes. You will have to write logic in your application to interpret it as a 64-bit unsigned integer. On the other hand, if you are using unsigned integers in your application, there is a good chance that you already have logic taking care of this. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Datatype

    Read the article
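    As a minimal sketch of the NUMERIC-plus-CHECK approach described above (the table and column names are invented for illustration), a NUMERIC(20, 0) column comfortably holds the full unsigned 64-bit range, and the constraint supplies the "unsigned" behaviour that the datatype itself does not enforce:

```sql
-- NUMERIC(20, 0) can hold every unsigned 64-bit value: 0 .. 18,446,744,073,709,551,615.
-- The table and column names are hypothetical; the CHECK constraint rejects negatives
-- and anything above the unsigned 64-bit maximum.
CREATE TABLE dbo.MigratedCounters
(
    CounterId  INT IDENTITY(1, 1) PRIMARY KEY,
    BigCounter NUMERIC(20, 0) NOT NULL
        CONSTRAINT CK_MigratedCounters_BigCounter_Unsigned64
            CHECK (BigCounter >= 0 AND BigCounter <= 18446744073709551615)
);

-- The MySQL maximum unsigned 64-bit value round-trips without loss.
INSERT INTO dbo.MigratedCounters (BigCounter)
VALUES (18446744073709551615);

-- A negative value fails the CHECK constraint:
-- INSERT INTO dbo.MigratedCounters (BigCounter) VALUES (-1);

SELECT CounterId, BigCounter
FROM dbo.MigratedCounters;
```

    The upper bound in the CHECK is optional, but it keeps the column honest about what a real unsigned 64-bit source could ever have produced.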

  • SQL SERVER – Free eBook Download – EPUB, MOBI, PDF Format

    - by pinaldave
    Microsoft has recently released free eBooks on various Microsoft technologies. The best part is that all these books are available in ePub, Mobi and PDF. You can download them to your local machine or eBook reader and read them. This is a great start, as many important subjects are now covered and converted into eBooks. I personally read through a few of the books and found they are very comprehensive and detailed. The goal is not to cover a complete technology in a single book but rather to pick a single topic and discuss it in detail. The sources of the books are white papers, the TechNet wiki, as well as Books Online, and the source is clearly listed right below each book title. The following are the books available for SQL Server technology and I encourage all of you to have a look at them as they are great resources: Master Data Services Capacity Guidelines, Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery, Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide, QuickStart: Learn DAX Basics in 30 Minutes, SQL Server 2012 Transact-SQL DML Reference. You can download the above eBooks from here. This is indeed a great attempt, as each book talks about a single subject in depth, keeping the author focused on a single, simple subject. I have previously written two books focusing on a single subject each, and I had great pleasure writing them as well. Writing on focused subjects gives the author complete freedom to explore a single subject without the burden of covering everything associated with that technology at large. Just like the eBooks mentioned earlier, my SQL Server Wait Stats book was inspired by my article series on SQL Wait Stats. The latest book, SQL Server Interview Questions and Answers, was derived from my article series on SQL Interview Q and A. Writing a book is an absolutely different concept than writing blog posts. When I was converting my blog posts to books, I ended up writing 50% new material and removing much of the repetitive content that shows up in a blog series. It was indeed fun, and at the same time it was a great learning experience for me as an individual. Reference: TechNet Wiki, Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How do I Series: Connecting an Expression Blend Project to Team Foundation Server

    - by Enrique Lima
    I have heard of people wanting and needing to add projects created in Expression Blend to Team Foundation Server. Here is the recipe: 1) Create your project in Expression Blend … click OK. 2) Select the option to open your recently created project in Visual Studio. Once that option is selected, your solution will open up in Visual Studio; close Expression Blend at this point. Now, I want to add this project to Source Control … Next, I connect to my TFS environment and pick the location to save my project. Once the project is added, I will get a status window of pending changes for my project; all that we are left to do is check in those changes. Since we have checked in our project, we can now close Visual Studio, and we will proceed to open Expression Blend again. And select our project we will! We notice some differences from before, just by opening it. What differences, you say?!? Notice the lock to the right of the item name … And we also get this when we right-click … And there we have it. It is a combination of tools to achieve this, but it is well worth it.

    Read the article

  • How to Freeze and Unfreeze Rows and Columns in Excel 2013

    - by Lori Kaufman
    If you are working on a large spreadsheet where all the rows and columns of data don’t fit on the screen, it is helpful to keep the heading rows and columns stationary so you can scroll through the data. You can freeze rows and columns in your spreadsheet. To do so, select the cell below and to the right of the rows and columns you want to freeze. Click the View tab.

    Read the article
