Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.

  • Stumped by "The remote server returned an error: (403) Forbidden" with WCF Service in https

    - by RJ
    I have a WCF service that I have boiled down to next to nothing because of this error, and it is driving me up the wall. Here's what I have now:

    - A very simple WCF service with one method that returns a string with the value "test".
    - A very simple web app that uses the service and puts the value of the string into a label.
    - A web server running IIS 6 on Windows 2003 with an SSL certificate.
    - Other WCF services on the same server that work.

    I publish the WCF service to its https location, run the web app in debug mode in VS, and it works perfectly. When I publish the web app to its https location on the same server, where the WCF service resides under the same SSL certificate, I get "The remote server returned an error: (403) Forbidden". I have changed almost every setting in IIS as well as in the WCF and web apps to no avail, and I have compared settings with the WCF services that work; everything is the same. It appears the problem has to do with the web app, but I am out of ideas. Below are the settings in the web.config for the WCF service and the web app.

    WCF service:

        <system.serviceModel>
          <bindings />
          <client />
          <services>
            <service behaviorConfiguration="Ucf.Smtp.Wcf.SmtpServiceBehavior" name="Ucf.Smtp.Wcf.SmtpService">
              <host>
                <baseAddresses>
                  <add baseAddress="https://test.net.ucf.edu/webservices/Smtp/" />
                </baseAddresses>
              </host>
              <endpoint address="" binding="wsHttpBinding" contract="Ucf.Smtp.Wcf.ISmtpService" bindingConfiguration="SSLBinding">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="Ucf.Smtp.Wcf.SmtpServiceBehavior">
                <serviceMetadata httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" httpsHelpPageEnabled="True" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    Web app:

        <system.serviceModel>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_ISmtpService" closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text"
                       textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                              maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Transport">
                  <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                  <message clientCredentialType="Windows" negotiateServiceCredential="true" establishSecurityContext="true" />
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <client>
            <endpoint address="https://net228.net.ucf.edu/webservices/smtp/SmtpService.svc" binding="wsHttpBinding"
                      bindingConfiguration="WSHttpBinding_ISmtpService" contract="SmtpService.ISmtpService"
                      name="WSHttpBinding_ISmtpService">
              <identity>
                <dns value="localhost" />
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>
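    To narrow down whether IIS or WCF is rejecting the call, my next step is to request the .svc endpoint directly from the web server box and look at the raw HTTP status. This is just a throwaway probe (the URL is the one from the client config above; everything else is plain .NET):

        // Minimal probe: a 403 here would point at IIS (directory security,
        // certificate trust) rather than at the WCF binding configuration.
        using System;
        using System.Net;

        class SvcProbe
        {
            static void Main()
            {
                var url = "https://net228.net.ucf.edu/webservices/smtp/SmtpService.svc";
                try
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("Status: " + response.StatusCode);
                    }
                }
                catch (WebException ex)
                {
                    var response = ex.Response as HttpWebResponse;
                    Console.WriteLine("Failed: " + (response != null
                        ? response.StatusCode.ToString()
                        : ex.Status.ToString()));
                }
            }
        }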

  • Android JSON HttpClient to send data to PHP server with HttpResponse

    - by Scoobler
    I am currently trying to send some data from an Android application to a PHP server (both are controlled by me). There is a lot of data collected on a form in the app, and this is written to the database; that all works. In my main code, firstly I create a JSONObject (I have cut it down here for this example):

        JSONObject j = new JSONObject();
        j.put("engineer", "me");
        j.put("date", "today");
        j.put("fuel", "full");
        j.put("car", "mine");
        j.put("distance", "miles");

    Next I pass the object over for sending, and receive the response:

        String url = "http://www.server.com/thisfile.php";
        HttpResponse re = HTTPPoster.doPost(url, j);
        String temp = EntityUtils.toString(re.getEntity());
        if (temp.compareTo("SUCCESS") == 0) {
            Toast.makeText(this, "Sending complete!", Toast.LENGTH_LONG).show();
        }

    The HTTPPoster class:

        public static HttpResponse doPost(String url, JSONObject c)
                throws ClientProtocolException, IOException {
            HttpClient httpclient = new DefaultHttpClient();
            HttpPost request = new HttpPost(url);
            HttpEntity entity;
            StringEntity s = new StringEntity(c.toString());
            s.setContentEncoding(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
            entity = s;
            request.setEntity(entity);
            HttpResponse response;
            response = httpclient.execute(request);
            return response;
        }

    This gets a response, but the server returns a 403 - Forbidden. I have tried changing the doPost function a little (this is actually a little better: as I said, I have a lot to send, basically three of the same form with different data, so I create three JSONObjects, one for each form entry; the entries come from the DB instead of the static example I am using). Firstly I changed the call over a bit:

        String url = "http://www.orsas.com/ServiceMatalan.php";
        Map<String, String> kvPairs = new HashMap<String, String>();
        kvPairs.put("vehicle", j.toString()); // Normally I would pass two more JSONObjects.....
        HttpResponse re = HTTPPoster.doPost(url, kvPairs);
        String temp = EntityUtils.toString(re.getEntity());
        if (temp.compareTo("SUCCESS") == 0) {
            Toast.makeText(this, "Sending complete!", Toast.LENGTH_LONG).show();
        }

    And the changes to the doPost function:

        public static HttpResponse doPost(String url, Map<String, String> kvPairs)
                throws ClientProtocolException, IOException {
            HttpClient httpclient = new DefaultHttpClient();
            HttpPost httppost = new HttpPost(url);
            if (kvPairs != null && kvPairs.isEmpty() == false) {
                List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(kvPairs.size());
                String k, v;
                Iterator<String> itKeys = kvPairs.keySet().iterator();
                while (itKeys.hasNext()) {
                    k = itKeys.next();
                    v = kvPairs.get(k);
                    nameValuePairs.add(new BasicNameValuePair(k, v));
                }
                httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
            }
            HttpResponse response;
            response = httpclient.execute(httppost);
            return response;
        }

    OK, so this returns a 200 response:

        int statusCode = re.getStatusLine().getStatusCode();

    However, the data received on the server cannot be parsed as a JSON string; I think it is badly formatted (this is the first time I have used JSON). If in the PHP file I do an echo on $_POST['vehicle'], I get the following:

        {\"date\":\"today\",\"engineer\":\"me\"}

    Can anyone tell me where I am going wrong, or if there is a better way to achieve what I am trying to do? Hopefully the above makes sense!

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no Report Library template and no Report Center site template; the only additional functionality available is the Report Viewer web part.

    Background: a 2-server WSS 3.0 farm, with Central Administration (CA), the web front end (WFE) and the Reporting Services add-in installed on one server, and SQL Server 2005 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated the install a number of times.

    I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'Site Settings' I have a 'Reporting Services' heading with a 'Manage shared schedules' item/link; I am not sure if there should be other options. I was under the impression that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a Report Library to an existing site, but neither seems available. I am at a loss as to what to do, as all the information online deals with installation issues/errors, which I seem to have eventually got past.

  • Help catching AV with WinDbg and ADPlus 7.0

    - by Stoune
    I want to catch a memory access violation in SQL Server Compact Edition, as described at http://debuggingblog.com/wp/2009/02/18/memory-access-violation-in-sql-server-compact-editionce/. The suggested config is:

        CRASH
        Quiet
        MyApp.exe
        NoDumpOnFirstChance
        clr;av
        FullDump
        gn

    I downloaded the latest Debugging Tools and found that Microsoft has rewritten the ADPlus tool in managed code and changed the syntax of the config file. I rewrote the config file like this:

        <ADPlus Version="2">
          <Settings>
            <RunMode>Crash</RunMode>
            <Option>Quiet</Option>
            <Option>NoDumpOnFirst</Option>
            <Sympath>c:\symbols\</Sympath>
            <OutputDir>c:\work\output\</OutputDir>
            <ProcessName>c:\work\app\output\MyApp.exe</ProcessName>
          </Settings>
          <Exceptions><!-- to get the full dump on CLR access violation -->
            <Exception Code="clr;av">
              <Actions1>FullDump</Actions1>
              <ReturnAction1>gn</ReturnAction1>
            </Exception>
          </Exceptions>
        </ADPlus>

    With this I get the error "Couldn't find exception with code: clr;av". If I understand correctly, it didn't load the SOS extension, but I can't find the right section and syntax to use to load it. adplus_old.vbs, for some reason, doesn't launch the process on Windows 7. Versions: WinDbg 6.12.0002.633 X86, ADPlus Engine Version 7.01.002 02/27/2009. Does anyone have a working example config for debugging a .NET app with the latest adplus.exe?

  • Unicode Collations problem?

    - by Bayonian
    (.NET 3.5 SP1, VS 2008, VB.NET, MS SQL Server 2008) I'm writing a small web app to test Khmer Unicode and Lao Unicode. I have a table that stores text in Khmer Unicode with the following structure:

        [t_id]   [int] IDENTITY(1,1) NOT NULL
        [t_chid] [int] NOT NULL
        [t_vn]   [int] NOT NULL
        [t_v]    [nvarchar](max) NOT NULL

    I can use LINQ to SQL to do CRUD normally, and the text displays properly on the web page, even though I didn't change the default collation of MS SQL Server 2008. But when it comes to searching the column [t_v], the page takes a very long time to load and in fact returns every row of that column; it never applies the keyword criteria that I use for the search. Here's my query for the search:

        Public Shared Function SearchTestingKhmerTable(ByVal keyword As String) As DataTable
            Dim db As New BibleDataClassesDataContext()
            Dim query = From b In db.khmer_books _
                        From ch In db.khmer_chapters _
                        From v In db.testing_khmers _
                        Where v.t_v.Contains(keyword) And ch.kh_book_id = b.kh_b_id And v.t_chid = ch.kh_ch_id _
                        Select b.kh_b_id, b.kh_b_title, ch.kh_ch_id, ch.kh_ch_number, v.t_id, v.t_vn, v.t_v

            Dim dtDataTableOne = New DataTable("dtOne")
            dtDataTableOne.Columns.Add("bid", GetType(Integer))
            dtDataTableOne.Columns.Add("btitle", GetType(String))
            dtDataTableOne.Columns.Add("chid", GetType(Integer))
            dtDataTableOne.Columns.Add("chn", GetType(Integer))
            dtDataTableOne.Columns.Add("vid", GetType(Integer))
            dtDataTableOne.Columns.Add("vn", GetType(Integer))
            dtDataTableOne.Columns.Add("verse", GetType(String))

            For Each r In query
                dtDataTableOne.Rows.Add(New Object() {r.kh_b_id, r.kh_b_title, r.kh_ch_id, r.kh_ch_number, r.t_id, r.t_vn, r.t_v})
            Next

            Return dtDataTableOne
        End Function

    Please note that I use exactly the same code and database design with Lao Unicode and it works just fine: I get the expected results for the search. I can't figure out what the problem is with searching the Khmer table.

  • Iterating arrays in a batch file

    - by dboarman-FissureStudios
    I am writing a batch file (I asked a question on SU) to iterate over terminal servers searching for a specific user. The basic flow of what I'm trying to do:

    1. Enter a user name
    2. Iterate terminal servers
    3. Display servers where the user is found (they can be found on multiple servers now and again, depending on how the connection is lost)
    4. Display a menu of options

    To iterate the terminal servers I have:

        for /f "tokens=1" %%Q in ('query termserver') do (set __TermServers.%%Q)

    Now, I am getting the error...

        Environment variable __TermServers.SERVER1 not defined

    ...for each of the terminal servers. This is really the only thing in my batch file at this point. Any idea why this error is occurring? Obviously, the variable is not defined, but I understood the SET command to do just that.

    I'm also thinking that in order to continue working on the iteration (each terminal server), I will need to do something like:

        :Search
        for /f "tokens=1" %%Q in ('query termserver') do (call Process)
        goto Break

        :Process
        for /f "tokens=1" %%U in ('query user %%username%% /server:%%Q') do (set __UserConnection = %%C)
        goto Search

    However, there are two things that bug me about this:

    1. Is the %%Q value still alive when calling Process?
    2. When I goto Search, will the for-loop be starting over?

    I'm doing this with the tools I have at my disposal, so as much as I'd like to hear about PowerShell and other ways to do this, it would be futile: I have Notepad and that's it. Note: I would continue this line of questions on SuperUser, except that it seems to be getting more into programming specifics.

  • RSA C# Encrypt Java Decrypt

    - by user353030
    Hi guys, in my program (server side, Java) I've created a keystore file with the command:

        keytool -genkey -alias myalias -keyalg RSA -validity 10000 -keystore my.keystore

    and exported the related X509 certificate with:

        keytool -export -alias myalias -file cert.cer -keystore my.keystore

    Then I saved cert.cer on the client side (C#) and wrote this code:

        X509Certificate2 x509 = new X509Certificate2();
        byte[] rawData = ReadFile("mycert.cer");
        x509.Import(rawData);
        RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)x509.PublicKey.Key;
        byte[] plainbytes = System.Text.Encoding.ASCII.GetBytes("My Secret");
        byte[] cipherbytes = rsa.Encrypt(plainbytes, true);
        String cipherHex = convertToHex(cipherContent);
        byte[] byteArray = encoding.GetBytes(cipherHex);
        ....

    On the server side I wrote this Java code:

        keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        keyStore.load(new FileInputStream("C:\\my.keystore"), "mypass".toCharArray());
        Key key = keyStore.getKey("myalias", "mypass".toCharArray());
        if (key instanceof PrivateKey) {
            Certificate cert = keyStore.getCertificate("myalias");
            PublicKey pubKey = cert.getPublicKey();
            privKey = (PrivateKey) key;
        }
        byte[] toDecodeBytes = new BigInteger(encodeMessageHex, 16).toByteArray();
        Cipher decCipher = Cipher.getInstance("RSA");
        decCipher.init(Cipher.DECRYPT_MODE, privKey);
        byte[] decodeMessageBytes = decCipher.doFinal(toDecodeBytes);
        String decodeMessageString = new String(decodeMessageBytes);

    I receive this error:

        javax.crypto.BadPaddingException: Data must start with zero

    Can you help me, please? Thanks!
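    Update with my current theory (unconfirmed, so please correct me): rsa.Encrypt(plainbytes, true) uses OAEP padding, while Cipher.getInstance("RSA") under the default SunJCE provider usually resolves to PKCS#1 v1.5 padding, so the two ends may simply disagree; and round-tripping the ciphertext through BigInteger can strip leading zero bytes or prepend a sign byte, which would also corrupt the block. A sketch of the C# side aligned to PKCS#1 v1.5 (ReadFile is my existing helper that returns the raw .cer bytes):

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        static class RsaSketch
        {
            static string EncryptToHex(byte[] certBytes, string secret)
            {
                var x509 = new X509Certificate2(certBytes);
                var rsa = (RSACryptoServiceProvider)x509.PublicKey.Key;

                // false = PKCS#1 v1.5, matching Java's usual "RSA/ECB/PKCS1Padding" default.
                byte[] cipherbytes = rsa.Encrypt(Encoding.ASCII.GetBytes(secret), false);

                // Hex-encode the raw block directly; no BigInteger round trip
                // that could change the length of the ciphertext.
                return BitConverter.ToString(cipherbytes).Replace("-", "").ToLowerInvariant();
            }
        }

    On the Java side, the matching change would be decoding the hex straight back to a byte array instead of going through new BigInteger(encodeMessageHex, 16).toByteArray(), so the block stays exactly key-size bytes long.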

  • Conversion failed when converting datetime from character string. Linq To SQL & OpenXML

    - by chobo2
    Hi, I have been following this tutorial on how to do a LINQ to SQL batch insert: http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx. However, I have a datetime field in my database and I keep getting this error:

        System.Data.SqlClient.SqlException was unhandled
        Message="Conversion failed when converting datetime from character string."
        Source=".Net SqlClient Data Provider"
        ErrorCode=-2146232060
        Class=16
        LineNumber=7
        Number=241
        Procedure="spTEST_InsertXMLTEST_TEST"
        Server=""
        State=1
        StackTrace:
            at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
            at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
            at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)

    I am not sure why: when I take the datetime from the generated XML file and manually copy it into SQL Server 2005, it has no problem with it and converts it just fine. This is my SP:

        CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText)
        AS
        DECLARE @hDoc int
        exec sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

        INSERT INTO UserTable(CreateDate)
        SELECT XMLProdTable.CreateDate
        FROM OPENXML(@hDoc, 'ArrayOfUserTable/UserTable', 2)
            WITH (CreateDate datetime) XMLProdTable

        EXEC sp_xml_removedocument @hDoc

    C# code:

        using (TestDataContext db = new TestDataContext())
        {
            UserTable[] testRecords = new UserTable[1];
            for (int count = 0; count < 1; count++)
            {
                UserTable testRecord = new UserTable() { CreateDate = DateTime.Now };
                testRecords[count] = testRecord;
            }

            StringBuilder sBuilder = new StringBuilder();
            System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
            XmlSerializer serializer = new XmlSerializer(typeof(UserTable[]));
            serializer.Serialize(sWriter, testRecords);
            db.spTEST_InsertXMLTEST_TEST(sBuilder.ToString());
        }

    Rendered XML doc:

        <?xml version="1.0" encoding="utf-16"?>
        <ArrayOfUserTable xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <UserTable>
            <CreateDate>2010-05-19T19:35:54.9339251-07:00</CreateDate>
          </UserTable>
        </ArrayOfUserTable>
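    One detail I noticed after posting (my observation, not something I have verified against documentation): the serialized value 2010-05-19T19:35:54.9339251-07:00 carries seven fractional digits and a timezone offset, and T-SQL's datetime type may not accept that form even though a trimmed version pastes fine, so my manual test may not have been comparing like with like. A workaround I'm considering is serializing a pre-formatted string instead of the raw DateTime; the DTO below is hypothetical, standing in for the generated UserTable class:

        using System;
        using System.Xml.Serialization;

        // Hypothetical transfer type: writes CreateDate in a form that
        // T-SQL datetime converts cleanly (no offset, 3 fractional digits).
        public class UserTableXml
        {
            [XmlIgnore]
            public DateTime CreateDate { get; set; }

            [XmlElement("CreateDate")]
            public string CreateDateText
            {
                get { return CreateDate.ToString("yyyy-MM-ddTHH:mm:ss.fff"); }
                set { CreateDate = DateTime.Parse(value); }
            }
        }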

  • How could installations/configurations be easier in Linux?

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking of config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier with time, and the apt-get installations with Ubuntu/Debian are heading in the right direction. But how can Linux be more user-friendly for us in the future?

    I thought that more could be automated, like in an IDE environment: e.g. typing svn would show all the commands, with a description of each as you move between them with your keyboard. That would be great, but that's just one example. Another is navigation between folders in the terminal: right now you have to type a lot just to jump from one folder to another. It would be great to have more automation here too. I know these extra features would slow down the server, but it's 2010 now; these features are not that heavy for the CPU, and they make a server friendlier to maintain and encourage maintenance rather than frightening you off.

    What do you think about this? Should/could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? Many things are done the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it is still repetitive and difficult to do easy tasks today. It should be easier, I think.

  • Pass table as parameter to SQLCLR TV-UDF

    - by Skeolan
    We have a third-party DLL that can operate on a DataTable of source information and generate some useful values, and we're trying to hook it up through SQLCLR to be callable as a table-valued UDF in SQL Server 2008. Taking the concept here one step further, I would like to program a CLR table-valued function that operates on a table of source data from the DB. I'm pretty sure I understand what needs to happen on the T-SQL side of things; but what should the method signature look like in the .NET (C#) code? What would be the parameter datatype for "table data from SQL Server"? For example:

        /* Setup */
        CREATE TYPE InTableType AS TABLE (LocationName VARCHAR(50), Lat FLOAT, Lon FLOAT)
        GO
        CREATE TYPE OutTableType AS TABLE (LocationName VARCHAR(50), NeighborName VARCHAR(50), Distance FLOAT)
        GO
        CREATE ASSEMBLY myCLRAssembly FROM 'D:\assemblies\myCLR_UDFs.dll'
        WITH PERMISSION_SET = EXTERNAL_ACCESS
        GO
        CREATE FUNCTION GetDistances(@locations InTableType) RETURNS OutTableType
        AS EXTERNAL NAME myCLRAssembly.GeoDistance.SQLCLRInitMethod
        GO

        /* Execution */
        DECLARE @myTable InTableType
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('aaa', -50.0, -20.0)
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('bbb', -20.0, -50.0)
        SELECT * FROM @myTable

        DECLARE @myResult OutTableType
        INSERT INTO @myResult
        MyCLRTVFunction @myTable --returns a table result calculated using the input

    The lat/lon distance thing is a silly example that should of course be better handled entirely in SQL, but I hope it illustrates the general intent of table-in, table-out through a table-valued UDF tied to a SQLCLR assembly. I am not certain this is possible; what would the SQLCLRInitMethod method signature look like in the C#?

        public class GeoDistance
        {
            [SqlFunction(FillRowMethodName = "FillRow")]
            public static IEnumerable SQLCLRInitMethod(<appropriateType> myInputData)
            {
                //...
            }

            public static void FillRow(...)
            {
                //...
            }
        }

    If it's not possible, I know I can use a "context connection=true" SQL connection within the C# code to have the CLR component query for the necessary data given the relevant keys, but that's sensitive to changes in the DB schema. So I hope to just have SQL bundle up all the source data and pass it to the function. Bonus question: assuming this works at all, would it also work with more than one input table?
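    In case it matters, I've read that table-valued parameters can't be passed into SQLCLR routines in SQL Server 2008 (scalar SQL types only), which, if true, forces the context-connection fallback I mentioned. Here is my sketch of that fallback; the dbo.Locations table and the @set key are invented for illustration:

        using System.Collections;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public class GeoDistance
        {
            // Buffers rows in the init method because the context connection
            // cannot be used while results are being streamed via FillRow.
            [SqlFunction(FillRowMethodName = "FillRow",
                         DataAccess = DataAccessKind.Read,
                         TableDefinition = "LocationName NVARCHAR(50), NeighborName NVARCHAR(50), Distance FLOAT")]
            public static IEnumerable SQLCLRInitMethod(SqlString locationSet)
            {
                ArrayList rows = new ArrayList();
                using (SqlConnection conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    SqlCommand cmd = new SqlCommand(
                        "SELECT LocationName, Lat, Lon FROM dbo.Locations WHERE SetName = @set", conn);
                    cmd.Parameters.AddWithValue("@set", locationSet.Value);
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // The third-party computation would replace this placeholder row.
                            rows.Add(new object[] { reader.GetString(0), "neighbor", 0.0 });
                        }
                    }
                }
                return rows;
            }

            public static void FillRow(object row, out SqlString name, out SqlString neighbor, out SqlDouble distance)
            {
                object[] cols = (object[])row;
                name = (string)cols[0];
                neighbor = (string)cols[1];
                distance = (double)cols[2];
            }
        }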

  • NSIS takes ownership of IIS system files

    - by Lucas
    I recently encountered an issue with NSIS that I believe is related to an interaction with UAC, but I am at a loss to explain it and I do not know how to prevent it in the future.

    I have an installer that creates and removes IIS virtual directories using the NsisIIS plugin. The installer appeared to work correctly on my Windows 7 workstation. When the installer was run on a Windows 2008 R2 server it installed properly, but the uninstaller removed all of the virtual directories and put IIS in an unusable state, to the point that I had to remove the Default Web Site and re-add it.

    What I eventually found was that all of the IIS configuration files under C:\Windows\System32\inetsrv\config had a lock icon on them. Some investigation seems to indicate that this means a user account has taken ownership of the file; however, all the files listed SYSTEM as the file owner. I did check a different server that I have not run the installer on, and it does not have the lock icon on the IIS files.

    I have also seen the same lock icon appear on other files that the NSIS installer creates. For instance, I have a Web.Config.tpl file that is processed using the NSIS ReplaceInFile, which also ends up with the lock icon after the installer finishes. After I explicitly grant another user account access to the file, the lock icon goes away. I run the installer under the local Administrator account on the 2008 R2 server, so I do not get the UAC prompt. Here is the relevant code from the install.nsi file:

        RequestExecutionLevel admin

        Section "Application" APP_SECTION
          SectionIn RO
          Call InstallApp
        SectionEnd

        Section "un.Uninstaller Section"
          Delete "$PROGRAMFILES\${PROGRAMFILESDIR}\Uninstall.exe"
          Call un.InstallApp
        SectionEnd

        Function InstallApp
          File /oname=Web.Config Web.Config.tpl
          !insertmacro ReplaceInFile Web.Config %CONNECTION_STRING% $CONNECTION_STRING
        FunctionEnd

        Function un.InstallApp
          ReadRegStr $0 HKLM "Software\${REGKEY}" "VirtualDir"
          NsisIIS::DeleteVDir "$0"
          Pop $0
        FunctionEnd

    I have three questions stemming from this incident:

    1. How did this happen?
    2. How can I fix my installer to prevent it from happening again?
    3. How can I repair the permissions on the IIS config files?

  • Setting up a "cookieless domain" to improve site performance

    - by Django Reinhardt
    I was reading Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain":

        Static content, such as images, JS and CSS files, don't need to be accompanied by cookies,
        as there is no user interaction with these resources. You can decrease request latency by
        serving static resources from a domain that doesn't serve cookies.

    Google then says that the best way to do this is to buy a new domain and point it at your current one:

        To reserve a cookieless domain for serving static content, register a new domain name and
        configure your DNS database with a CNAME record that points the new domain to your existing
        domain A record. Configure your web server to serve static resources from the new domain,
        and do not allow any cookies to be set anywhere on this domain. In your web pages, reference
        the domain name in the URLs for the static resources.

    This is pretty straightforward stuff, except for the part that says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set. Apologies if this is a real newb question, I'm new at this! Thanks.
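    In case it helps frame an answer: I gather the .ASPXANONYMOUS cookie comes from the anonymousIdentification feature, which can be switched off in web.config if nothing depends on it (<anonymousIdentification enabled="false" /> under system.web); that's my reading, not something I've confirmed. As a belt-and-braces idea I've been sketching a module that strips any cookie headers on the static host; the host name below is a placeholder and I haven't battle-tested this:

        using System;
        using System.Web;

        // Sketch: if a request on the static domain does reach ASP.NET,
        // remove anything the framework or the app tried to set.
        public class NoCookieModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PreSendRequestHeaders += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    if (ctx.Request.Url.Host.Equals("static.example.com", StringComparison.OrdinalIgnoreCase))
                    {
                        ctx.Response.Cookies.Clear();
                        // The Headers collection is only writable under the IIS7 integrated pipeline.
                        ctx.Response.Headers.Remove("Set-Cookie");
                    }
                };
            }

            public void Dispose() { }
        }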

  • SSIS package not picking up configuration files correctly

    - by blntechie
    We have recently migrated some 30 DTS packages in SQL Server 2000 to SSIS packages in SQL Server 2008. We have created the packages in such a way that all environment-related variables and other required information are picked up from configuration files, for ease of maintaining the packages across different environments like Dev, QA and Prod.

    After setting up all the packages with the config files, when we tested the packages from Business Intelligence Development Studio, they worked fine and picked up the values from the configuration file. When we changed the values in the config files to Dev or another environment, they correctly picked up the new values and executed. We tested this across two different environments and the packages worked fine, so we deployed to Prod, where they also worked fine.

    Yesterday I had to make a functional change to one package, so I made the change (it is just changing a parameter in a SQL procedure execution task, not related to any variables) and tested it in BIDS against two environments, where it worked fine. As the change was not related to any environment change, we deployed only the updated package (not the associated config file) manually in Prod (i.e. without the use of a manifest). The config file which was used by the package previously, and which was working fine in Prod, remained unaltered. But when the package was executed, it was pointing to QA, so I believe the package didn't read from the config file.

    One reason may be that it is still using the last-executed values, which usually remain in the .dtsx file (this can be checked by opening the file in a text editor). But normally, when a package is executed, those values are overwritten from the config file; I am guessing that is not happening here. What are the possible reasons for this? We have tested extensively, switching between test environments, and it does not show this behavior, yet we have now encountered it in the Prod environment twice. Has anyone else experienced this, and how did you resolve it?

  • PerformancePoint dashboard permissions problem in MOSS

    - by Nathan DeWitt
    I have a PerformancePoint dashboard running in a MOSS 2007 portal. The dashboard consists of one SSRS 2005 report, running in SharePoint integrated mode. NT Authority\Authenticated Users have read permissions to the report library containing the SSRS report, the dashboard, and the report library containing the dashboard. Users that attempt to access the dashboard receive the following error message:

        The permissions granted to user 'DOMAIN\firstname.lastname' are insufficient for
        performing this operation. (rsAccessDenied)

    Users that then click on the direct link to the report in MOSS see the report with no problem, and subsequent visits to the dashboard show the report with no problem. The report is using a data source that is located one folder up from the report location, and the report has been updated to point to the correct shared data source after deployment. Both the report and the data source have been published. The data source is using stored credentials, with a domain service account that has been set to "Use as Windows credentials". This service account is serving other reports in other areas with no problem.

    Edit: OK, I've gotten a lot more information on this problem. The request is never actually being made to the data source. The user comes to the dashboard and requests a report for the first time using their Kerberos token identifying themselves; the report server looks in the Users table of the Report Server database, finds that they are not listed, and generates this rsAccessDenied error. Once they view the report directly, their name is in this table and they never have the problem again. Unfortunately, removing the user from the Users table in the RS database doesn't cause the error to happen again. Everything I've read says that when you run a report server in MOSS integrated mode all your permissions are handled at the MOSS report library level, and all authenticated users have permissions to the report library, as stated earlier. Any ideas?

  • Reporting System architecture for better performance

    - by pauloya
    Hi, we have a product that runs SQL Server Express 2005 and uses mainly ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow from 300 to 5000 rows per day and keep a history of 5 years, so they can grow to 10 million rows. We have built a reporting platform that allows customers to build reports based on templates, fields and filters.

    We have faced performance problems almost since the beginning. We try to keep report display times under 10 seconds, but some reports take up to 25 seconds (especially for those customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling that there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with the optimization. We also added a few tables that keep redundant data, but then we have the added problem of keeping this data up to date, and SQL Express has a limit on database size.

    We are now at a point where we have to decide whether to give up real-time reports or cut the history, in order to get better performance. I would like to ask what the recommended approach is for this kind of system. Also, should we start looking at third-party tools/platforms? I know OLAP can be an option, but can we make it work on SQL Server Express, or at least with a license that is cheap enough to distribute to thousands of deployments? Thanks

  • Force creation of query execution plan

    - by Marc
    I have the following situation:

    - A .NET 3.5 WinForms client app accessing SQL Server 2008
    - Some queries returning relatively big amounts of data are used quite often by one form
    - Some users run against a local SQL Express instance and restart their machines at least daily
    - Other users are working remotely over slow network connections

    The problem is that after a restart, the first time users open this form the queries are extremely slow, taking more or less 15 s to execute even on a fast machine. Afterwards the same queries take only 3 s. Of course this is because no data is cached yet and it must first be loaded from disk.

    My question: would it be possible to force the loading of the required data into SQL Server's cache in advance?

    Note: my first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries are already cached and execute quickly. However, I don't want to pull the results of the queries over to the client, as some users are working remotely or otherwise have slow networks. So I thought of executing the queries from a stored procedure and putting the results into temporary tables, so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?
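    For the record, this is the shape of what I had in mind for the background warm-up; the stored procedure name is made up, and the idea is that it wraps the form's queries and assigns the results to variables so nothing travels back over the wire:

        using System.Data;
        using System.Data.SqlClient;
        using System.Threading;

        static class CacheWarmer
        {
            // Fire-and-forget at application start; best effort only.
            public static void WarmUpAsync(string connectionString)
            {
                ThreadPool.QueueUserWorkItem(delegate
                {
                    try
                    {
                        using (var conn = new SqlConnection(connectionString))
                        using (var cmd = new SqlCommand("dbo.usp_WarmUpFormQueries", conn))
                        {
                            cmd.CommandType = CommandType.StoredProcedure;
                            cmd.CommandTimeout = 60;
                            conn.Open();
                            cmd.ExecuteNonQuery(); // no result set reaches the client
                        }
                    }
                    catch (SqlException)
                    {
                        // If warm-up fails, the form is merely slow on first open.
                    }
                });
            }
        }

    Inside such a procedure, reads like SELECT @x = COUNT(*) ... over the same tables and predicates would pull pages into the buffer pool without producing output, which is why the dynamic-column issue wouldn't matter; whether that reliably warms the same pages as the real queries is the part I'm unsure about.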

  • Do connection string DNS lookups get cached?

    - by joshcomley
    Suppose the following: I have a database set up on database.mywebsite.com, which resolves to IP 111.111.1.1, running from a local DNS server on our network. I have countless ASP, ASP.NET and WinForms applications that use a connection string utilising database.mywebsite.com as the server name, all running from the internal network. Then the box running the database dies, and I switch over to a new box with an IP of 222.222.2.2. So I update the DNS for database.mywebsite.com to point to 222.222.2.2.

    Will all the applications, and the computers running them, have cached the old resolved IP address? I'm assuming they will have. Any suggestions along the lines of "don't have your IP change each time you switch box" are not too welcome, as I cannot control this aspect of the situation, unfortunately. We are currently using the machine name of the box, which changes every time it dies, and all apps etc. have to be updated with the new machine name. It hurts.
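    As far as I understand it (my assumption, not tested at scale), the Windows resolver caches a positive lookup only for the record's TTL, so keeping a low TTL on database.mywebsite.com should keep the switchover window short; that's general DNS behavior rather than anything specific to SqlClient. A trivial check I can run on any client box to see what it resolves to right now:

        using System;
        using System.Net;

        class DnsCheck
        {
            static void Main()
            {
                // Compare the output against the new server's IP after the switch.
                foreach (IPAddress addr in Dns.GetHostAddresses("database.mywebsite.com"))
                {
                    Console.WriteLine(addr);
                }
            }
        }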

  • How do I do nested transactions in NHibernate?

    - by Gavin Schultz-Ohkubo
    Can I do nested transactions in NHibernate, and how do I implement them? I'm using SQL Server 2008, so support is definitely in the DBMS. I find that if I try something like this:

        using (var outerTX = UnitOfWork.Current.BeginTransaction())
        {
            using (var nestedTX = UnitOfWork.Current.BeginTransaction())
            {
                // ... do stuff
                nestedTX.Commit();
            }
            outerTX.Commit();
        }

    then by the time it comes to outerTX.Commit() the transaction has become inactive, and the result is an ObjectDisposedException on the session's AdoTransaction. Are we therefore supposed to create nested NHibernate sessions instead? Or is there some other class we should use to wrap around the transactions (I've heard of TransactionScope, but I'm not sure what that is)? I'm now using Ayende's UnitOfWork implementation (thanks Sneal). Forgive any naivety in this question; I'm still new to NHibernate. Thanks!

    EDIT: I've discovered that you can use TransactionScope, such as:

        using (var transactionScope = new TransactionScope())
        {
            using (var tx = UnitOfWork.Current.BeginTransaction())
            {
                // ... do stuff
                tx.Commit();
            }
            using (var tx = UnitOfWork.Current.BeginTransaction())
            {
                // ... do stuff
                tx.Commit();
            }
            transactionScope.Complete();
        }

    However I'm not all that excited about this, as it locks us in to using SQL Server, and I've also found that if the database is remote then you have to worry about having MSDTC enabled... one more component to go wrong. Nested transactions are so useful and easy to do in SQL that I kind of assumed NHibernate would have some way of emulating the same...

  • CLR Stored Procedures

    - by Paul Hatcherian
    In an ASP.NET application, I have a small number of fairly complex, frequently used operations to execute against a database. In these operations, one or more of several tables needs updates or inserts based on a logical evaluation of both input parameters and the values in certain tables. I've maintained a separation of logic and data access, so the operation currently looks like this:

    1. Request received from client
    2. Business layer invokes data layer to retrieve data from database
    3. Business layer processes result and determines which operation to execute
    4. Business layer invokes appropriate data operation
    5. Response sent to client

    As you can see, the client is kept waiting while two separate requests are made to the database. In searching for a solution to this, I've found CLR stored procedures, but I'm not sure I have the right idea about what they are useful for. I have written a replacement for the code above which essentially places steps 2-4 in a CLR SP. My understanding is that the SP will be executed locally by SQL Server, resulting in only one call being made to the server. My initial benchmark tests show this is actually orders of magnitude slower than my original code, but I attribute that to recompilation of the code, which I have not worked out yet, and/or some flaw in my environment.

    My question is basically: is this the intended use of CLR SPs, or am I missing something? I realize this is a bit of a compromise structurally, so if there's a better way to do it I'd love to hear it.
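    For reference, this is roughly the shape of my replacement CLR procedure, simplified to a sketch; the table and column names are placeholders, not my real schema:

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class Operations
        {
            // Steps 2-4 collapsed into one round trip: read state over the
            // context connection, decide, then write, all inside SQL Server.
            [SqlProcedure]
            public static void ProcessRequest(int requestId)
            {
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();

                    int state;
                    using (var read = new SqlCommand("SELECT State FROM dbo.Requests WHERE Id = @id", conn))
                    {
                        read.Parameters.AddWithValue("@id", requestId);
                        state = (int)read.ExecuteScalar();
                    }

                    // The business-layer decision, now running next to the data.
                    string sql = state == 0
                        ? "INSERT INTO dbo.Audit(RequestId) VALUES(@id)"
                        : "UPDATE dbo.Requests SET State = State + 1 WHERE Id = @id";

                    using (var write = new SqlCommand(sql, conn))
                    {
                        write.Parameters.AddWithValue("@id", requestId);
                        write.ExecuteNonQuery();
                    }
                }
            }
        }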

  • SQL exception when transferring project from USB to C:\

    - by jello
    I'm working on a C# Windows program with Visual Studio 2008. Usually I work from school, directly on my USB drive. But when I copy the folder to my hard drive at home, an SqlException goes unhandled whenever I try to write to the database, at the conn.Open(); line. Here's the exception:

        Database 'L:\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' already exists. Choose a different database name.
        Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'.

    It's weird, because my connection string says |DataDirectory|, so it should work on any drive. Here's my connection string:

        string connStr = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                         "Initial Catalog=PatientMonitoringDatabase; " +
                         "Integrated Security=True";

    Someone told me to connect to localhost with SQL Server Management Studio Express and remove/detach the existing PatientMonitoringDatabase database, because whether it's a persistent database or one only active within a running application, you can't have two databases with the same name attached to a SQL Server instance at the same time. So I did that, and now it gives me:

        Directory lookup for the file "C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf" failed with the operating system error 5 (Access is denied.).
        Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'

    I checked the files' properties, and I have Allow for everyone. Does anyone know what's going on here?
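    From what I've pieced together (my working theory, not a confirmed fix): with AttachDbFilename plus Initial Catalog, SQL Express registers the database under the catalog name, so a copy of the same .mdf at a new path collides with the name still attached from the old drive; and the "error 5" afterwards suggests the SQL Server service account has no NTFS rights under My Documents, regardless of what my own account is allowed. The variant I'm going to try lets the file path name the database and uses a user instance:

        // Hedged variant: no Initial Catalog (the path identifies the database),
        // and User Instance runs SQL Express under my own account, which should
        // sidestep the access denied on folders the service account can't read.
        string connStr = "Data Source=.\\SQLEXPRESS;" +
                         "AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf;" +
                         "Integrated Security=True;" +
                         "User Instance=True";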

  • Creating and publishing Excel files in MOSS 2007 using data from SQL Server

    - by Diomos
    Hello, I need help with this matter: we have a template Excel file in which all calculations are already set up, and a user can request a 'report'. The idea is to create a button on our site (a SharePoint portal); after clicking it, a new Excel file is generated. This means getting actual data from the database (SQL Server 2005 SP2), importing it into the template, letting the calculations generate the proper data, and then allowing the user to see the file. For now it's enough to publish the final Excel file in a document library.

    I am quite new to WSS 3.0 and MOSS 2007 and I need some advice about the best solution; this looks like a quite complex task to me. Is there some direct way to accomplish it? Or do I need one tool to get the data from the database and import it into the Excel file (SSRS?) and another tool to publish it in the document library (MOSS 2007 Excel Services?). I heard something about PerformancePoint Server 2007; is this a way to follow? Thanks in advance for any advice!

  • Python Socket Getting Connection Reset

    - by Ian
    I created a threaded socket listener that stores newly accepted connections in a queue. The socket threads then read from the queue and respond. For some reason, when benchmarking with 'ab' (Apache Benchmark) using a concurrency of 2 or more, I always get a connection reset before it's able to complete the benchmark (this is taking place locally, so there's no external connection issue).

        class server:
            _ip = ''
            _port = 8888

            def __init__(self, ip=None, port=None):
                if ip is not None:
                    self._ip = ip
                if port is not None:
                    self._port = port
                self.server_listener(self._ip, self._port)

            def now(self):
                return time.ctime(time.time())

            def http_responder(self, conn, addr):
                httpobj = http_builder()
                httpobj.header('HTTP/1.1 200 OK')
                httpobj.header('Content-Type: text/html; charset=UTF-8')
                httpobj.header('Connection: close')
                httpobj.body("Everything looks good")
                data = httpobj.generate()
                sent = conn.sendall(data)

            def http_thread(self, id):
                self.log("THREAD %d: Starting Up..." % id)
                while True:
                    conn, addr = self.q.get()
                    ip, port = addr
                    self.log("THREAD %d: responding to request: %s:%s - %s" % (id, ip, port, self.now()))
                    self.http_responder(conn, addr)
                    self.q.task_done()
                    conn.close()

            def server_listener(self, host, port):
                self.q = Queue.Queue(0)
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.bind((host, port))
                sock.listen(5)
                for i in xrange(4):  # thread count
                    thread.start_new(self.http_thread, (i + 1,))
                while True:
                    self.q.put(sock.accept())
                sock.close()

        server('', 9999)

    When running the benchmark, I get a totally random number of good requests before it errors out, usually between 4 and 500.

    Edit: It took me a while to figure it out, but the problem was in sock.listen(5). Because I was using Apache Benchmark with a higher concurrency (5 and up), it was causing the backlog of connections to pile up, at which point connections started getting dropped by the socket.

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus]

    I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377 ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single processor for the whole thing, and typically only proc 0.

    Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.)

    My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR (MAX)
        AS
        SET NoCount ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table
        (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards where companyId = @companyId and Active=1 order by addedBy desc --2

        OPEN creditCards --3
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData where cardid = @cardId order by valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            insert into @tmpTable (CardId, DecryptedCard) values (@cardId, @decryptedCardData)
            set @decryptedCardData = ''
            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        select CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards
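    On the "gross hack" front, here is the sort of client-side split I was imagining: the same work farmed out as two calls on separate connections, so SQL Server can schedule them on different CPUs. The half-splitting procedure (GetCreditCardsByCustomerIdHalf) doesn't exist yet; it's hypothetical, and the two result sets would still need merging:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Threading;

        static class ParallelDecrypt
        {
            public static void Run(string connStr, Guid companyId, string decryptionKey)
            {
                var done = new List<ManualResetEvent>();
                for (int half = 0; half < 2; half++)
                {
                    var finished = new ManualResetEvent(false);
                    done.Add(finished);
                    int h = half; // capture a stable copy for the work item
                    ThreadPool.QueueUserWorkItem(delegate
                    {
                        using (var conn = new SqlConnection(connStr))
                        using (var cmd = new SqlCommand("dbo.GetCreditCardsByCustomerIdHalf", conn))
                        {
                            cmd.CommandType = CommandType.StoredProcedure;
                            cmd.Parameters.AddWithValue("@companyId", companyId);
                            cmd.Parameters.AddWithValue("@DecryptionKey", decryptionKey);
                            cmd.Parameters.AddWithValue("@half", h); // 0 = top 50%, 1 = bottom 50%
                            conn.Open();
                            using (var reader = cmd.ExecuteReader())
                            {
                                while (reader.Read()) { /* merge rows into a shared result */ }
                            }
                        }
                        finished.Set();
                    });
                }
                WaitHandle.WaitAll(done.ToArray());
            }
        }

    That said, I suspect the nested cursors are the real bottleneck: a single set-based SELECT with DecryptByCert joined across CreditCardData would at least give the optimizer a chance at a parallel plan on its own.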

  • ASP.NET application using old connection string.

    - by Doug S.
    I am trying to publish a website using ASP.NET MVC3, EF and Code First, with a SQL Server 2008 backend. On my local machine I was using a SQL Express db for development, but now that I am pushing live I want to use my hosted production database. The problem is that when I run the application, it is still using my local db connection string.

    I have completely removed the old connection string from my web.config file and am using the <clear /> tag before creating the new connection string. I have also cleaned the solution and rebuilt, but somehow it is still connecting to the old db. What am I missing? This is the new connection string:

        <connectionStrings>
          <clear />
          <add name="CellularAutomataDBContext"
               connectionString="Server=XXX; Database=CellularAutomata; User ID=XXX; Password=XXX; Trusted_Connection=False"
               providerName="System.Data.SqlClient" />
        </connectionStrings>
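    One thing I plan to check (this is my assumption about how Code First resolves connections, so correct me if it's wrong): the connection-string name has to match what the context looks up, by default the context class name, otherwise EF quietly falls back to its SQL Express convention, which would explain the behavior. This quick dump shows which connection string the context actually resolved:

        using System;

        static class ConnStringCheck
        {
            // Assumes CellularAutomataDBContext derives from DbContext (EF Code First).
            public static void Dump()
            {
                using (var db = new CellularAutomataDBContext())
                {
                    Console.WriteLine(db.Database.Connection.ConnectionString);
                }
            }
        }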

  • TeamCity with TFS - workspace problems

    - by Tom
    Hi, we have been using CC.NET as our CI server for a month or so now, and it has worked OK with TFS. In the config we were able to specify the TFS server, username, password, project and workspace, which is all good. Now we are moving over to TeamCity, mainly because it just seems more solid and is much nicer to use. The problem is getting it to work with TFS. For the purpose of this question, both the workspace and machine name are "BuildMachine", the username is "BuildUser", and the TFS project is "$/Project/Dev/Website".

    I seem to have set it up correctly, I think, as testing the connection succeeds. But when I run a build I get a TFS error: "RunBuildException when running build stage UpdateSourcesFromServer." It goes on to say: "No matched workspaces were found. Will recreate workspace and performing clean checkout." It then tries to create a new workspace, something like this: TeamCity-S-sqa9qe2aulx22gz4rzkogl5kr/BuildUser. It tries to set up some mappings and then fails because: "The working folder C:\ is already in use by the workspace BuildMachine;BuildUser on computer BuildMachine".

    This seems right, as this is the workspace that CC.NET was using, and C:\project\dev\website is the path to the project. The problem is: why didn't TeamCity pick this up and use this workspace? Why does it try to create its own new one? Any idea how I can fix this? Thanks
