Search Results

Search found 19446 results on 778 pages for 'network printer'.


  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least. We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to: read data directly from the mainframe and produce reports based on it, possibly using SQL queries; and read and write data from custom .NET applications. We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .NET, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.
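
    For the "some sort of adapter" part, most mainframe data-access products surface VSAM files through an ODBC or OLE DB driver, at which point the .NET side is ordinary ADO.NET. The sketch below assumes a hypothetical ODBC DSN named MainframeVSAM with made-up credentials, table and columns; it only illustrates the shape of the code, not any specific vendor's driver.

        using System;
        using System.Data.Odbc;

        class MainframeReportQuery
        {
            static void Main()
            {
                // Hypothetical DSN exposed by a vendor-supplied mainframe ODBC driver.
                string connectionString = "DSN=MainframeVSAM;Uid=mfuser;Pwd=secret;";

                using (OdbcConnection connection = new OdbcConnection(connectionString))
                using (OdbcCommand command = new OdbcCommand(
                    "SELECT CUSTOMER_ID, BALANCE FROM CUSTOMER_MASTER", connection))
                {
                    connection.Open();
                    using (OdbcDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Feed rows straight into a report instead of a nightly text dump.
                            Console.WriteLine("{0}: {1}", reader[0], reader[1]);
                        }
                    }
                }
            }
        }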

    Read the article

  • HDFS: some datanodes of the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster are suddenly disconnected while I run MapReduce code on 10 GB of data. After all mappers finished and around 80% of the reducers had been processed, one or more datanodes randomly disconnected from the network, and then the other datanodes started to disappear from the network as well, even after I killed the MapReduce job once I found that some datanodes were disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls on all computing nodes, disabled SELinux and increased the open-file limit to 20000, but none of it worked at all... Does anyone have an idea how to solve this problem? The following is the error log from the MapReduce job:

        12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
        java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    and the following are logs from one of the datanodes:

        2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
        2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
        2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
            at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
            at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
            at java.lang.Thread.run(Thread.java:722)
        2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
        2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
            at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
            at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
            at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
            at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

        <configuration>
          <property> <name>dfs.name.dir</name> <value>/home/hadoop/data/name</value> </property>
          <property> <name>dfs.data.dir</name> <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value> </property>
          <property> <name>dfs.replication</name> <value>3</value> </property>
          <property> <name>dfs.datanode.max.xcievers</name> <value>4096</value> </property>
          <property> <name>dfs.http.address</name> <value>0.0.0.0:20070</value> <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description> </property>
          <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:20075</value> <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description> </property>
          <property> <name>dfs.secondary.http.address</name> <value>0.0.0.0:20090</value> <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description> </property>
          <property> <name>dfs.datanode.address</name> <value>0.0.0.0:20010</value> <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description> </property>
          <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:20020</value> <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description> </property>
          <property> <name>dfs.datanode.https.address</name> <value>0.0.0.0:20475</value> </property>
          <property> <name>dfs.https.address</name> <value>0.0.0.0:20470</value> </property>
        </configuration>

    mapred-site.xml:

        <configuration>
          <property> <name>mapred.job.tracker</name> <value>masternode:29001</value> </property>
          <property> <name>mapred.system.dir</name> <value>/home/hadoop/data/mapreduce/system</value> </property>
          <property> <name>mapred.local.dir</name> <value>/home/hadoop/data/mapreduce/local</value> </property>
          <property> <name>mapred.map.tasks</name> <value>32</value> <description>default number of map tasks per job.</description> </property>
          <property> <name>mapred.tasktracker.map.tasks.maximum</name> <value>4</value> </property>
          <property> <name>mapred.reduce.tasks</name> <value>8</value> <description>default number of reduce tasks per job.</description> </property>
          <property> <name>mapred.map.child.java.opts</name> <value>-Xmx2048M</value> </property>
          <property> <name>io.sort.mb</name> <value>500</value> </property>
          <property> <name>mapred.task.timeout</name> <value>1800000</value> <!-- 30 minutes --> </property>
          <property> <name>mapred.job.tracker.http.address</name> <value>0.0.0.0:20030</value> <description>50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description> </property>
          <property> <name>mapred.task.tracker.http.address</name> <value>0.0.0.0:20060</value> <description>50060</description> </property>
        </configuration>

    Read the article

  • Write to a textfile using Javascript

    - by karikari
    Under Firefox, I want to do something like this: I have a .htm file that has a button on it. When I click the button, it should write some text into a local .txt file. By the way, my .htm file is run locally too. I have tried multiple times using this code, but still can't make my .htm file write to my text file:

        function save() {
            try {
                netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
            } catch (e) {
                alert("Permission to save file was denied.");
            }
            var file = Components.classes["@mozilla.org/file/local;1"]
                .createInstance(Components.interfaces.nsILocalFile);
            file.initWithPath( savefile );
            if ( file.exists() == false ) {
                alert( "Creating file... " );
                file.create( Components.interfaces.nsIFile.NORMAL_FILE_TYPE, 420 );
            }
            var outputStream = Components.classes["@mozilla.org/network/file-output-stream;1"]
                .createInstance( Components.interfaces.nsIFileOutputStream );
            outputStream.init( file, 0x04 | 0x08 | 0x20, 420, 0 );
            var output = 'test test test test';
            var result = outputStream.write( output, output.length );
            outputStream.close();
        }

    This part is for the button:

        <input type="button" value="write to file2" onClick="save();">

    Read the article

  • Redirect uploaded files to another server, using nginx

    - by Serg ikS
    I am creating a web service of scheduled posts to a social network, and need help dealing with file uploads under high traffic.
    Process overview: the user uploads files to SomeServer (not mine); SomeServer then responds with a JSON string; my web app should store that JSON response.
    Opt. 1 — Save, cURL POST, delete tmp. The stupid way I made it work: the user uploads files to MyWebApp; MyWebApp cURLs the file on to SomeServer, getting the response.
    Opt. 2 — JS magic. The smart way it could be perfect: the user uploads the file directly to SomeServer from within an iFrame; MyWebApp gets the response through JavaScript. But this is(?) impossible due to the 'Same Origin Policy', isn't it?
    Opt. 3 — nginx proxying? The better way for a production server: the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp.
    Does this make any sense, and what would be the nginx config for, say, a /fileupload location to proxy it to SomeServer?

    Read the article

  • Python: How can I use Twisted as the transport for SUDS?

    - by jathanism
    I have a project based on Twisted that is used to communicate with network devices, and I am adding support for a new vendor (Citrix NetScaler) whose API is SOAP. Unfortunately the support for SOAP in Twisted still relies on SOAPpy, which is badly out of date. In fact, as of this question (I just checked), twisted.web.soap itself hasn't even been updated in 21 months! I would like to ask if anyone has any experience they would be willing to share with utilizing Twisted's superb asynchronous transport functionality with SUDS. It seems like plugging in a custom Twisted transport would be a natural fit for SUDS' Client.options.transport; I'm just having a hard time wrapping my head around it. I did come up with a way to call the SOAP method with SUDS asynchronously by utilizing twisted.internet.threads.deferToThread(), but this feels like a hack to me. Here is an example of what I've done, to give you an idea:

        # netscaler is a module I wrote using suds to interface with NetScaler SOAP
        # Source: http://bitbucket.org/jathanism/netscaler-api/src
        import netscaler
        import os
        import sys
        from twisted.internet import reactor, defer, threads

        # netscaler.API is the class that sets up the suds.client.Client object
        host = 'netscaler.local'
        username = password = 'nsroot'
        wsdl_url = 'file://' + os.path.join(os.getcwd(), 'NSUserAdmin.wsdl')
        api = netscaler.API(host, username=username, password=password, wsdl_url=wsdl_url)

        results = []
        errors = []

        def handleResult(result):
            print '\tgot result: %s' % (result,)
            results.append(result)

        def handleError(err):
            sys.stderr.write('\tgot failure: %s' % (err,))
            errors.append(err)

        # this converts the api.login() call to a Twisted thread.
        # api.login() should return True and is equivalent to:
        # api.service.login(username=self.username, password=self.password)
        deferred = threads.deferToThread(api.login)
        deferred.addCallbacks(handleResult, handleError)

        reactor.run()

    This works as expected and defers return of the api.login() call until it is complete, instead of blocking. But as I said, it doesn't feel right. Thanks in advance for any help, guidance, feedback, criticism, insults, or total solutions.

    Read the article

  • How do I deploy my ASP MVC project to my Win7 system?

    - by MedicineMan
    Hi, I am deploying my first ASP MVC project. The project runs just fine; I would like to take the next step and run it outside of my Visual Studio environment on my local IIS. I am running Windows 7 and Visual Studio 2008, and I have created a basic ASP MVC project. In my solution, I find the project I would like to deploy, right-click and select Publish. I have backed up C:\inetpub\wwwroot\ and would like to deploy there. I accept all defaults and click the "Publish" button. The Build Output window shows 1 project failed. Basically it says that it is unable to add any of the binaries to the site, copy files, or create new directories... Access is denied. When I click "Publish" at work, I don't get these errors. What do I have to do here to publish the website so that it is available to the rest of my home network? Also, wwwroot appears to be read-only, but telling the folder not to be read-only doesn't seem to help; it still appears to be read-only even after I've unselected this property in the properties dialog.

    Read the article

  • Hibernate design to speed up querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of these tables into Lists to perform the route planner logic. I would appreciate it if anyone has ideas on how to speed up performance, or suggestions for another way to approach this problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT, DOUBLE), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID  DISTANCE
        1   1            2          0.383657
        2   1            17         0.173201
        3   1            63         0.258781
        4   1            64         0.013726
        5   1            65         0.459829
        6   1            95         0.458769

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        StopID      RouteID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "STOPID")
            private int stopID;
            @Column(name = "ROUTEID", nullable = false)
            private int routeID;
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "COORDINATEID", nullable = false)
            private Coordinate coordinate;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private int CoordinateID;
            @Column(name = "LAT")
            private double latitude;
            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private int CoordinateID;
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "FROMCOORDID", nullable = false)
            private Coordinate fromCoordID;
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "TOCOORDID", nullable = false)
            private Coordinate toCoordID;
            @Column(name = "DISTANCE", nullable = false)
            private double distance;
        }

    Read the article

  • QT QSslError being signaled with the error code set to NoError

    - by Nantucket
    My Problem: I compiled OpenSSL into Qt to enable OpenSSL support. Everything appeared to go correctly in the compile. However, when I try to use the official HTTP example application that can be found here, every time I try to download an https page it signals two QSslErrors, each with contents NoError. The types of QSslError, including NoError, are documented here, poorly. There is no explanation of why they even included an error type called NoError, or what it means. Bizarrely, the NoError error code seems to be accurate, as it downloads the remote https document perfectly even while signaling the error. Does anyone have any idea what this means and what could possibly be causing it?
    Optional Background Reading: Here is the relevant part of the code from the example app (this is connected to the network connection's sslErrors signal by the constructor):

        void HttpWindow::sslErrors(QNetworkReply*, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }

            if (QMessageBox::warning(this, tr("HTTP"),
                                     tr("One or more SSL errors has occurred: %1").arg(errorString),
                                     QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore) {
                reply->ignoreSslErrors();
            }
        }

    I have tried the old version of this example, and it produced the same result. I have tried OpenSSL 1.0.0a and 0.9.8o. I have tried compiling OpenSSL myself, and I have tried using pre-compiled versions of OpenSSL from the net. All produce the same result. If this were my first time using Qt with SSL, I would almost think this is the intended result (even though their example application is popping up error warning message windows), if not for the fact that last time I played with Qt, using what would now be an old version of Qt with an old version of SSL, I distinctly remember everything working fine with no error windows. My system is running Windows 7 x64.

    Read the article

  • How to identify the type of socket data?

    - by Nitesh Panchal
    Hello, maybe I am not able to express my doubt properly in this question, but I will still try. Basically I created a simple socket-based chat program and everything works fine, but I think I have made many patches in it from the design point of view. I use ObjectInputStream and ObjectOutputStream in my program. The question I want to ask is: how do I identify the different types of data that I send across the network? Say, if it is a simple String object, I directly add it to List<String> chatMessages. Now, to ban certain users, I created another class:

        public class User {
            private String name;
            private String id;
            // getters and setters
        }

    This User class has no real importance to me so far; I only created it to properly identify the action. Thus, if I receive an instanceof User I can be sure that some user is to be banned. That way I don't have to hardcode strings. I mean, at first I thought of sending something like "Banned User :" + userName and then checking whether the string startsWith "Banned User :" before taking some action :p. I've created a User class, but it has no other meaning in my program. I want to know whether directly sending strings is a good approach, or whether creating a class for every action is better. If I am not clear, please let me know. If I have hundreds of actions, do I have to create hundreds of classes so I can check via instanceof? Say I now plan to create a BUZZ-like facility like the one available in Yahoo Messenger. Should I again create another class named BUZZ so it can be identified easily?

    Read the article

  • Login problem with php

    - by shinod
    I want to prevent multiple simultaneous logins with the same login credentials. So I made a column login_status that is set to 1 when someone logs in and changed to 0 when they log out; I also set a session after a successful login. If the user doesn't click log out (because they closed the tab, or because of some network problem) the database isn't updated, and then nobody can use those login credentials again. So I use an AJAX call to store the current timestamp in the database for the related login credentials, and it is updated every 2 minutes as long as the user stays on that page. Then, if someone attempts to log in with the same credentials and login_status is 1, it checks this timestamp, and if the timestamp is older than 3 minutes it allows the login. That solves that problem. But the new problem is this: if a user closes the tab or browser window, then after 3 minutes someone can log in with the same credentials from somewhere else, and if the previous user reopens that page it will automatically be logged in, because the session is already set. How can I prevent this?

    Read the article

  • Distributing a bundle of files across an extranet

    - by John Zwinck
    I want to be able to distribute bundles of files, about 500 MB per bundle, to all machines on a corporate "extranet" (which is basically a few LANs connected using various private mechanisms, including leased lines and VPN). The total number of hosts is roughly 100, and the goal is to get a copy of the bundle from one host onto all the other hosts reliably, quickly, and efficiently. One important issue is that some hosts are grouped together on single fast LANs in which case the network I/O should be done once from one group to the next and then within each group between all the peers. This is as opposed to a strict central server system where multiple hosts might each fetch the same bundle over a slow link, rather than once via the slow link and then between each other quickly. A new bundle will be produced every few days, and occasionally old bundles will be deleted (but that problem can be solved separately). The machines in question happen to run recent Linuxes, but bonus points will go to solutions which are at least somewhat cross-platform (in which case the bundle might differ per platform but maybe the same mechanism can be used). That's pretty much it. I'm not opposed to writing some code to handle this, but it would be preferable if it were one of bash, Python, Ruby, Lua, C, or C++.

    Read the article

  • PHP cURL error: "Empty reply from server"

    - by ABach
    All, I have a class function to interface with the RESTful API for Last.FM - its purpose is to grab the most recent tracks for my user. Here it is: private static $base_url = 'http://ws.audioscrobbler.com/2.0/'; public static function getTopTracks($options = array()) { $options = array_merge(array( 'user' => 'bachya', 'period' => NULL, 'api_key' => 'xxxxx...', // obfuscated, obviously ), $options); $options['method'] = 'user.getTopTracks'; // Initialize cURL request and set parameters $ch = curl_init(); curl_setopt_array($ch, array( CURLOPT_URL => self::$base_url, CURLOPT_POST => TRUE, CURLOPT_POSTFIELDS => $options, CURLOPT_RETURNTRANSFER => TRUE, CURLOPT_TIMEOUT => 30, CURLOPT_USERAGENT => 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)' )); $results = curl_exec($ch); return $results; } This returns "Empty reply from server". I know that some have suggested that this error comes from some fault in network infrastructure; I do not believe this to be true in my case. If I run a cURL request through the command line, I get my data; the Last.FM service is up and accessible. Before I go to those folks and see if anything has changed, I wanted to check with you fine folks and see if there's some issue in my code that would be causing this. Thanks!

    Read the article

  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology Reports To: Chief Information Officer Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks. Essential Duties & Responsibilities: This person is involved in all aspects of the software development process such as: Participation in software product definitions, including requirements analysis and specification Development and refinement of simulations or prototypes to confirm requirements Feasibility and cost-benefit analysis, including the choice of architecture and framework Application and database design Implementation (e.g. installation, configuration, customization, integration, data migration) Authoring of documentation needed by users and partners Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles Maintenance Qualifications: Bachelor's degree in computer science or software engineering Several years of professional programming experience Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP) Hypertext Markup Language (HTML) JavaScript Cascading Style Sheets (CSS) Proficiency in the following principles, practices, and techniques: Accessibility Interoperability Usability Security (especially prevention of SQL injection and cross-site scripting (XSS) attacks) Object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism, etc.) Relational database design (e.g. normalization, orthogonality) Search engine optimization (SEO) Asynchronous JavaScript and XML (AJAX) Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET ADO.NET (including ADO.NET Entity Framework) ASP.NET (including ASP.NET MVC Framework) Windows Presentation Foundation (WPF) Language Integrated Query (LINQ) Extensible Application Markup Language (XAML) jQuery Transact-SQL (T-SQL) Microsoft Visual Studio Microsoft Internet Information Services (IIS) Microsoft SQL Server Adobe Photoshop

    Read the article

  • What are the most time-consuming checks performed by .NET when executing a managed application?

    - by ltorje
    I've developed a .NET-based Windows service that uses partly managed (C#) and partly unmanaged code (C/C++ libraries). In some domain environments (e.g. a Win 2k3 32-bit server inside the domain abc.com) the service sometimes takes more than 30 seconds to start (especially on OS restart), thus failing to start. I suspect it has something to do with enterprise-level security, but I do not know for sure. http://msdn.microsoft.com/en-us/library/aa720255%28VS.71%29.aspx
    I've tried the following without success:
    - delay loading references by moving the using directives as far as possible from the ServiceBase implementation (especially the XML namespace, known to cause delays in loading)
    - delay loading and configuring log4net
    - precompiling the code using ngen
    - delaying the start of the worker thread
    - adding/removing a manifest with the dependencies set inside
    - signing/unsigning the binaries
    - using the configuration settings (there are a lot of settings, and the scope level for all of them is set to application) as late as possible
    - adding all dependencies to the GAC
    I haven't yet tried adding security demands to the class that implements the Main method. I didn't try to implement my own configuration loader because, after inspecting the autogenerated code, I noticed that the settings class is a singleton and gets its instance on call. Completely removing the log4net dependency worked, but that is not an option. When the network card is disabled the service starts immediately. Any suggestions/comments/solutions would be most welcome.
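
    One thing the list above gestures at (delaying the start of the worker thread) can be paired with ServiceBase.RequestAdditionalTime, so the Service Control Manager does not give up at the 30-second mark while the slow initialization runs on a background thread. The sketch below is a generic illustration rather than the original service's code; MyService and InitializeSlowDependencies are placeholder names.

        using System.ServiceProcess;
        using System.Threading;

        // Hypothetical service class; names are placeholders, not the original code.
        public class MyService : ServiceBase
        {
            protected override void OnStart(string[] args)
            {
                // Tell the Service Control Manager we may need more than the default 30 seconds.
                RequestAdditionalTime(60000);

                // Do the expensive work (log4net setup, config load, network access)
                // off the SCM's thread so OnStart can return quickly.
                ThreadPool.QueueUserWorkItem(delegate { InitializeSlowDependencies(); });
            }

            private void InitializeSlowDependencies()
            {
                // ... configure log4net, read settings, open network resources ...
            }

            protected override void OnStop()
            {
            }
        }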

    Read the article

  • C# connect to domain SQL Server 2005 from non-domain machine

    - by user304582
    Hi, I asked a question a few days ago (http://stackoverflow.com/questions/2795723/access-to-sql-server-2005-from-a-non-domain-machine-using-windows-authentication) which got some interesting, but not usable, suggestions. I'd like to ask the question again, but make clear what my constraints are: I have a Windows domain within which a machine is running SQL Server 2005 and which is configured to support only Windows authentication. I would like to run a C# client application on a machine on the same network, but which is NOT on the domain, and access a database on the SQL Server 2005 instance. I CANNOT create or modify OS or SQL Server users on either machine, I CANNOT make any changes to permissions or impersonation, and I CANNOT make use of runas. I know that I can write Perl and Java applications that can connect to the SQL Server database using only these four parameters: server name, database name, username (in the form domain\user), and password. In C# I have tried various things around:

        string connectionString = "Data Source=server;Initial Catalog=database;User Id=domain\user;Password=password";
        SqlConnection connection = new SqlConnection(connectionString);
        connection.Open();

    and tried setting Integrated Security to true and false, but nothing seems to work. Is what I am trying to do simply impossible in C#? Thanks for any help, Martin
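
    As a point of reference only, here is a cleaned-up sketch of the attempt quoted above, with server, database, domain, user and password as placeholders. Two details worth noting in C#: the backslash in "domain\user" has to be escaped or written in a verbatim string, and with System.Data.SqlClient a User ID/Password pair is only sent for SQL Server authentication, so this syntax alone does not perform Windows authentication against the domain.

        using System.Data.SqlClient;

        class ConnectionAttempt
        {
            static void Main()
            {
                // Placeholder values only; the verbatim string keeps the backslash literal.
                string connectionString =
                    @"Data Source=server;Initial Catalog=database;User Id=domain\user;Password=password";

                using (SqlConnection connection = new SqlConnection(connectionString))
                {
                    connection.Open(); // fails if the server only accepts Windows authentication
                }
            }
        }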

    Read the article

  • Converted PowerBuilder to ASP.Net browsing Errors

    - by user493325
    I had a PowerBuilder application which I converted to a web application in the form of ASP.NET (.aspx) files. I deployed and published the converted web application (copied it and granted the ASP.NET, Network Service and IUser accounts permissions so that users can access it) in IIS 6.0 on Windows Server 2003; the ASP.NET version is 2.0. The error message I get when I browse the default.aspx page is the following:

        Server Error in '/' Application.
        Runtime Error
        Description: An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine.
        Details: To enable the details of this specific error message to be viewable on remote machines, please create a <customErrors> tag within a "web.config" configuration file located in the root directory of the current web application. This <customErrors> tag should then have its "mode" attribute set to "Off".

        <!-- Web.Config Configuration File -->
        <configuration>
            <system.web>
                <customErrors mode="Off"/>
            </system.web>
        </configuration>

        Notes: The current error page you are seeing can be replaced by a custom error page by modifying the "defaultRedirect" attribute of the application's <customErrors> configuration tag to point to a custom error page URL.

        <!-- Web.Config Configuration File -->
        <configuration>
            <system.web>
                <customErrors mode="RemoteOnly" defaultRedirect="mycustompage.htm"/>
            </system.web>
        </configuration>

    Another error message that appears on the server is:

        Server Error in '/' Application.
        Configuration Error
        <roleManager enabled="true">
        <membership>
        </roleManager>

    Thanks in advance...

    Read the article

  • Connect to a remote Oracle 11g server using OracleClient of .NET 2.0

    - by Raghu M
    I have to connect to an Oracle server on the network using a .NET / C# (WinForms) application. I am trying to use System.Data.OracleClient, but in vain. Here are the details I can think of that might help someone reading this question:
    Platform: Visual Studio 2005 / .NET 2.0 with C# on Windows Vista Home Premium
    Library: System.Data.OracleClient
    Server: Oracle 11g (located on the same LAN)
    Please note that I don't have Oracle installed locally, and I have hunted through every discussion forum possible for help, but most of them assume a local Oracle installation! Here is my connection string:

        "User Id=TSUSER;Password=ts12TS;Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MyServerIP)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL)));"

    And I get this error: OCIEnvCreate failed with return code -1 but error message text was not available. Stack trace:

        at System.Data.OracleClient.OciHandle..ctor(OciHandle parentHandle, HTYPE handleType, MODE ocimode, HANDLEFLAG handleflags)
        at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName)
        at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions)
        at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
        at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.OracleClient.OracleConnection.Open()
        at DGKit.Util.DataUtil.Generate() in D:\SVNRoot\sandbox\DGDev\Util\DataUtil.cs:line 68

    Read the article

  • sql-server performance optimization by removing print statements

    - by AG
    We're going through a round of sql-server stored procedure optimizations. The one recommendation we've found that clearly applies for us is 'SET NOCOUNT ON' at the top of each procedure. (Yes, I've seen the posts that point out issues with this depending on what client objects you run the stored procedures from but these are not issues for us.) So now I'm just trying to add in a bit of common sense. If the benefit of SET NOCOUNT ON is simply to reduce network traffic by some small amount every time, wouldn't it also make sense to turn off all the PRINT statements we have in the stored procedures that we only use for debugging? I can't see how it can hurt performance. OTOH, it's a bit of a hassle to implement due to the fact that some of the print statements are the only thing within else clauses, so you can't just always comment out the one line and be done. The change carries some amount of risk so I don't want to do it if it isn't going to actually help. But I don't see eliminating print statements mentioned anywhere in articles on optimization. Is that because it is so obvious no one bothers to mention it?

    Read the article

  • Using boost asio for pub/sub style tcp in a game loop

    - by unohoo
    I have been reading through the boost asio documentation for a couple of hours now, and while I think the documentation is really great, I am still left a bit confused on how to implement the system that I need. I have to stream info, from a game engine, to a list of computers over tcp. One snag is that, unlike traditional pub/sub, the computer that does the distribution of info is actually the computer that has to connect to the subscribers as well (instead of the subscribers registering with the publisher). This is done via a config file - a list of ip's/ports along with the data that they each require. The subscribers listen, but do not know the ip of the publisher. (As a side note, I'm quite new to network programming, so maybe I'm missing something .. but it's strange that I do not find much information regarding this style of "inverted" client-server model..) I am looking for suggestions for the implementation of such a system using boost asio. Of course I have to integrate the networking into an already existing engine, so with regards to that: What would be a good way to handle messages being sent to multiple computers every frame? Use async_write, call io_service.run and then reset every frame? Would having io_service.run have its own thread be better? Or should I just use threads and use blocking writes?

    Read the article

  • Sync Framework Considerations for Smart Client app

    - by DarkwingDuck
    Microsoft Sync Framework with SQL 2005? Is it possible? It seems to hint that the OOTB providers use SQL2008 functionality. I'm looking for some quick wins in relation to a sync project. The client app will be offline for a number of days. There will be a central server that MUST be SQL Server 2005. I can use .net 3.5. Basically the client app could go offline for a week. When it comes back online it needs to sync its data. But the good thing is that the data only needs to push to the server. The stuff that syncs back to the client will just be lookup data which the client never changes. So this means I don't care about sync collisions. To simplify the scenario for you, this smart client goes offline and the user surveys data about some observations. They enter the data into the system. When the laptop is reconnected to the network, it syncs back all that data to the server. There will be other clients doing the same thing too, but no one ever touches each other's data. Then there are some reports on the server for viewing the data that has been pushed to the server. This also needs to use ClickOnce. My biggest concern is that there is an interim release while a client is offline. This release might require a new field in the database, and a new field to fill in on the survey. Obviously that new field will be nullable because we can't update old data, that's fine to set as an assumption. But when the client connects up and its local data schema and the server schema don't match, will sync framework be able to handle this? After the data is pushed to the server it is discarded locally. Hope my problem makes sense.

    Read the article

  • Large flags enumerations in C#

    - by LorenVS
    Hey everyone, got a quick question that I can't seem to find anything about... I'm working on a project that requires flag enumerations with a large number of flags (up to 40-ish), and I don't really feel like typing in the exact mask for each enumeration value:

        public enum MyEnumeration : ulong
        {
            Flag1 = 1,
            Flag2 = 2,
            Flag3 = 4,
            Flag4 = 8,
            Flag5 = 16,
            // ...
            Flag16 = 65536,
            Flag17 = 65536 * 2,
            Flag18 = 65536 * 4,
            Flag19 = 65536 * 8,
            // ...
            Flag32 = 65536 * 65536,
            Flag33 = 65536 * 65536 * 2 // right about here I start to get really pissed off
        }

    Moreover, I'm also hoping that there is an easy(ier) way for me to control the actual arrangement of bits on different endian machines, since these values will eventually be serialized over a network:

        public enum MyEnumeration : uint
        {
            Flag1 = 1,    // BIG: 0x00000001, LITTLE: 0x01000000
            Flag2 = 2,    // BIG: 0x00000002, LITTLE: 0x02000000
            Flag3 = 4,    // BIG: 0x00000004, LITTLE: 0x03000000
            // ...
            Flag9 = 256,  // BIG: 0x00000010, LITTLE: 0x10000000
            Flag10 = 512, // BIG: 0x00000011, LITTLE: 0x11000000
            Flag11 = 1024 // BIG: 0x00000012, LITTLE: 0x12000000
        }

    So, I'm kind of wondering if there is some cool way I can set my enumerations up like:

        public enum MyEnumeration : uint
        {
            Flag1 = flag(1), // BOTH: 0x80000000
            Flag2 = flag(2), // BOTH: 0x40000000
            Flag3 = flag(3), // BOTH: 0x20000000
            // ...
            Flag9 = flag(9), // BOTH: 0x00800000
        }

    What I've Tried:

        // this won't work because Math.Pow returns double
        // and because C# requires constants for enum values
        public enum MyEnumeration : uint
        {
            Flag1 = Math.Pow(2, 0),
            Flag2 = Math.Pow(2, 1)
        }

        // this won't work because C# requires constants for enum values
        public enum MyEnumeration : uint
        {
            Flag1 = Masks.MyCustomerBitmaskGeneratingFunction(0)
        }

        // this is my best solution so far, but is definitely
        // quite clunkie
        public struct EnumWrapper<TEnum> where TEnum
        {
            private BitVector32 vector;

            public bool this[TEnum index]
            {
                // returns whether the index-th bit is set in vector
            }

            // all sorts of overriding using TEnum as args
        }

    Just wondering if anyone has any cool ideas, thanks!
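
    One common way to avoid typing each mask, and possibly what the flag(n) pseudo-syntax above is reaching for, is to build the constants with the left-shift operator, which C# does accept in constant expressions. A small sketch follows (the flag names are placeholders). Byte order, by contrast, is a serialization concern rather than something the enum declaration can control; the numeric value can be run through System.Net.IPAddress.HostToNetworkOrder when it is written to the wire.

        using System;

        [Flags]
        public enum MyEnumeration : ulong
        {
            None   = 0,
            Flag1  = 1UL << 0,
            Flag2  = 1UL << 1,
            Flag3  = 1UL << 2,
            // ...
            Flag33 = 1UL << 32,
            Flag40 = 1UL << 39
        }

        class ShiftDemo
        {
            static void Main()
            {
                MyEnumeration value = MyEnumeration.Flag3 | MyEnumeration.Flag33;
                // Flag test without .NET 4's HasFlag: mask and compare against zero.
                bool hasFlag33 = (value & MyEnumeration.Flag33) != 0;
                Console.WriteLine(hasFlag33); // True
            }
        }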

    Read the article

  • .Net long-running scheduled code execution

    - by Prof Plum
    I am working on a couple of projects right now where I really wish there was some sort of component to which I could specify a time and date, and have it then execute some method:

        DateTime date = new DateTime(x, x, x, x, x, x);
        ScheduledMethod sMethod = new ScheduledMethod(date, [method delegate of some sort]);
        // at the specified date, sMethod invokes [method delegate of some sort]

    I know that I can do this with Windows Workflow Foundation as a long-running process, which is good for certain things, but are there any alternatives? Workflow is not exactly straightforward with the details, and it would be nice to be able to deploy something simpler for lightweight tasks. An example would be a method that checks a network folder once a day and deletes any files that are more than 30 days old. I realize that this may be pie-in-the-sky dreaming, but it would be extremely useful for automating certain mundane maintenance tasks (scheduled SQL operations, file system cleansing, routine email sending, etc.). It does not necessarily have to be .NET, but that is where I am coming from. Any ideas?
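
    For lightweight jobs such as the 30-day cleanup mentioned above, a single System.Threading.Timer with a computed due time comes close to the hypothetical ScheduledMethod. A minimal sketch, with the folder path and the 02:00 schedule as placeholder choices:

        using System;
        using System.IO;
        using System.Threading;

        class ScheduledCleanup
        {
            static Timer timer; // keep a reference so the timer is not garbage collected

            static void Main()
            {
                // Run once a day, starting at the next 02:00 local time (placeholder schedule).
                DateTime firstRun = DateTime.Today.AddDays(1).AddHours(2);
                TimeSpan dueTime = firstRun - DateTime.Now;

                timer = new Timer(PurgeOldFiles, null, dueTime, TimeSpan.FromDays(1));
                Console.ReadLine(); // keep the process alive; a Windows service would do this instead
            }

            static void PurgeOldFiles(object state)
            {
                // Hypothetical network folder; delete anything older than 30 days.
                foreach (string path in Directory.GetFiles(@"\\server\share\drop"))
                {
                    if (File.GetLastWriteTime(path) < DateTime.Now.AddDays(-30))
                        File.Delete(path);
                }
            }
        }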

    Read the article

  • Developing on both Windows & Linux machines simultaneously

    - by Jamie
    Sorry for the bad title (couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands etc.). So, obviously, I can't do the development on my Windows machine, and I don't wish to code on the dev machine, svn commit, and then svn update on the Linux machine. Is there a way that any changes I make on my dev machine will be quickly mirrored to the Linux machine? SVN is not a very quick alternative, and of course some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (and a bit slow too). As fellow developers, I would like to know if you've been in a similar situation and how you've resolved it. On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the Linux machine, because it's a cluster 'master' machine and therefore has quite a special configuration. Thanks guys! EDIT: I've also thought about having web services on the Linux machine and then just calling them from code, thus separating the platform development dependency. What do you think about that? Thanks.
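
    If the network-share idea turns out to be acceptable (for example, the Linux box exporting the project directory over Samba), one low-tech option is to watch the local working copy on Windows and copy files across as they are saved. The sketch below only illustrates that idea; the paths are placeholders, and a tool such as rsync over SSH or an IDE's remote-sync feature would do the same job with less code.

        using System;
        using System.IO;

        class MirrorToLinux
        {
            // Placeholder paths: local working copy and the Samba-mounted project directory.
            const string LocalRoot = @"C:\work\project";
            const string RemoteRoot = @"\\linuxbox\project";

            static void Main()
            {
                FileSystemWatcher watcher = new FileSystemWatcher(LocalRoot);
                watcher.IncludeSubdirectories = true;
                watcher.Changed += OnChanged;
                watcher.Created += OnChanged;
                watcher.EnableRaisingEvents = true;

                Console.WriteLine("Mirroring changes; press Enter to stop.");
                Console.ReadLine();
            }

            static void OnChanged(object sender, FileSystemEventArgs e)
            {
                if (!File.Exists(e.FullPath)) return; // ignore directory events and deletes

                string target = Path.Combine(RemoteRoot, e.Name);
                Directory.CreateDirectory(Path.GetDirectoryName(target));
                File.Copy(e.FullPath, target, true); // overwrite the copy on the Linux side
            }
        }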

    Read the article

  • MsSql Server high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have an MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB. It runs on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM. When I open the Activity Monitor, I see that I have a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 seconds before it even starts to "work". I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). Once the first request has been served, it seems as if it's not asleep anymore, and every subsequent request is served instantly, at the speed of light. The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have only the default clustered index on each primary key column. Any ideas?

    Read the article

  • Using SetParent to steal the main window of another process but keeping the message loops separate

    - by insta
    Background: My coworker and I are maintaining a million-line legacy application we inherited. Its frontend is written in VB6, and as we're devoting almost all of our resources to converting it to C#, we are looking for quick & dirty solutions to our specific problem. The application behaves in a plugin-ish manner. There are up to 20ish separate ActiveX controls that can be loaded at once in a grid-style layout. The problem is that the ActiveX controls do all of their processing on their own UI thread, and as a lot of it is blocking waiting on network access, the UI gets very soupy. When our hosting C# app loads these controls, it becomes unresponsive because of how many controls are chewing up UI resources doing nothing. To top it off, the controls are fragile and will crash at the slightest provocation. When they are hosted in the main C# app, it creates serious instability. The best my coworker and I have come up with so far is starting a process per ActiveX control. This process, which we call the proxy, is another winforms app. It uses named pipes to communicate with the hosting process. The hosting process creates a window, loads an ActiveX control of our choice (via some reflections & AxHost magic), and tells the main process what its window handle is via the named pipe. The main process uses a combination of SetParent, and SetWindowPos to move the proxy application into itself to emulate a plugin. Size updates are sent via the named pipe. This works well enough until the ActiveX application does some sort of lengthy process and we click around on the main window while it's working. For awhile the main window is responsive, but eventually it becomes unresponsive as the child window waits for its UI thread. How can we keep the child windows on their own complete thread while still getting the benefits of SetParent? (please let me know if anything isn't clear!)
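
    For readers unfamiliar with the Win32 calls named above, here is a minimal sketch of the P/Invoke declarations and the reparenting step. The handle plumbing (obtaining the proxy window's handle over the named pipe) is assumed, and everything beyond the Win32 APIs themselves is a placeholder; it illustrates the mechanism described, not a fix for the shared-UI-thread problem being asked about.

        using System;
        using System.Runtime.InteropServices;

        static class ProxyEmbedding
        {
            [DllImport("user32.dll", SetLastError = true)]
            static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

            [DllImport("user32.dll", SetLastError = true)]
            static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
                int x, int y, int cx, int cy, uint flags);

            const uint SWP_NOZORDER = 0x0004;
            const uint SWP_SHOWWINDOW = 0x0040;

            // hostPanelHandle: a Control.Handle in the main app's grid cell.
            // proxyWindowHandle: the handle the proxy process reported over the named pipe.
            public static void Embed(IntPtr proxyWindowHandle, IntPtr hostPanelHandle,
                int width, int height)
            {
                SetParent(proxyWindowHandle, hostPanelHandle);
                SetWindowPos(proxyWindowHandle, IntPtr.Zero, 0, 0, width, height,
                    SWP_NOZORDER | SWP_SHOWWINDOW);
            }
        }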

    Read the article
