Search Results

Search found 41147 results on 1646 pages for 'database security'.

Page 765/1646 | < Previous Page | 761 762 763 764 765 766 767 768 769 770 771 772  | Next Page >

  • C# reference collection for storing reference types

    - by ivo s
    I would like to implement a collection (something like List&lt;T&gt;) that would hold all the objects I create over the entire lifespan of my application, as if it were an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and that I don't re-fetch information I have already fetched from the database. So all my objects would basically be in one place - my collection.
    A nice thing I can do with this is avoid database calls for data I already have (even if I updated it after retrieval, it is still up to date - unless of course some other process updated it, but that is a different concern). I don't want to call new Customer("James Thomas") again if I already initialized James Thomas sometime in the past. Currently I end up with multiple copies of the same object across the appdomain - some out of sync, others in sync - and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (per process would be even better, if possible).
    I can't use regular collections like List or ArrayList, for example, because I cannot pass parameters by their real local reference to their existing Add() methods using ref where I create them, so that's not too good I think. So how can this be implemented - can it be implemented at all? A 'linked list' type of class with all methods working with ref &amp; out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList&lt;T&gt;.Add(ref T obj)?
    So the bottom line is: I don't want to re-create an object if I've already created it before during the entire application life, unless I decide to re-create it explicitly (maybe it's out of date or something, so I have to fetch it again from the db). Are there alternatives, maybe?
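    A minimal sketch of one way to get that "single instance per key" behaviour (illustrative only - the class and member names below are made up, not from the question; since C# classes are reference types, storing them in a dictionary already gives pointer-like sharing, so no ref/out parameters are needed):

        using System;
        using System.Collections.Generic;

        // Identity-map style registry: at most one live instance per key.
        public class EntityRegistry<TKey, TValue> where TValue : class
        {
            private readonly object _sync = new object();
            private readonly Dictionary<TKey, TValue> _items = new Dictionary<TKey, TValue>();

            // Returns the cached instance for the key, or creates and caches one.
            public TValue GetOrCreate(TKey key, Func<TKey, TValue> factory)
            {
                lock (_sync)
                {
                    TValue value;
                    if (!_items.TryGetValue(key, out value))
                    {
                        value = factory(key);   // e.g. a database fetch
                        _items[key] = value;
                    }
                    return value;
                }
            }

            // Drops a stale instance so the next GetOrCreate re-fetches it.
            public bool Invalidate(TKey key)
            {
                lock (_sync) { return _items.Remove(key); }
            }
        }

    Usage would look like customers.GetOrCreate(42, id => LoadCustomerFromDb(id)); every later call with the same id returns the same reference until Invalidate(42) is called.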

    Read the article

  • SSRS2005 timeout error

    - by jaspernygaard
    Hi, I've been running around in circles for the last 2 days, trying to figure out a problem in our customer's live environment. I figured I might as well post it here, since Google gave me very limited information on the error message (5 results, to be exact). The error boils down to a timeout when requesting a certain report in SSRS2005, when a certain parameter is used.
    The deployment scenario is: Machine #1 running Reporting Services (SQL2005, W2K3, IIS6); Machine #2 running the datawarehouse database (SQL2005, W2K3), which is the data source for #1. Both machines are running on the same VM cluster and LAN.
    The report requests a fairly simple SP - let's call it sp(param $a, param $b). When requested with param $a filled, it executes correctly. When using param $b, it times out after the global timeout period has passed. If I run the stored procedure with param $b directly from SQL Management Studio on #2, it returns the results perfectly fine (within 3-4s). I've profiled the datawarehouse database on #2, and when param $b is used, the query from the reporting service to the database never reaches #2.
    The error message that I get upon timeout, when using param $b and invoking the report directly from the SSRS web interface, is: "An error has occurred during report processing. Cannot read the next data row for the data set DataSet. A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user."
    The ExecutionLog for SSRS doesn't give me much information besides the error message rsProcessingAborted. I'm running out of ideas of how to nail this problem, so I would greatly appreciate any comments, suggestions or ideas. Thanks in advance!

    Read the article

  • NPE with logging while launching webstart on jre7 update 40

    - by atulsm
    My application was running fine until I upgraded my JRE to 7u40. When my application is initializing, it calls Logger.getLogger("ClassName"), and I'm getting the following exception:

        java.lang.ExceptionInInitializerError
            at java.util.logging.Logger.demandLogger(Unknown Source)
            at java.util.logging.Logger.getLogger(Unknown Source)
            at com.company.Application.Applet.<clinit>(Unknown Source)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at com.sun.javaws.Launcher.executeApplication(Unknown Source)
            at com.sun.javaws.Launcher.executeMainClass(Unknown Source)
            at com.sun.javaws.Launcher.doLaunchApp(Unknown Source)
            at com.sun.javaws.Launcher.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.lang.NullPointerException
            at java.util.logging.Logger.setParent(Unknown Source)
            at java.util.logging.LogManager$6.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.util.logging.LogManager.doSetParent(Unknown Source)
            at java.util.logging.LogManager.access$1100(Unknown Source)
            at java.util.logging.LogManager$LogNode.walkAndSetParent(Unknown Source)
            at java.util.logging.LogManager$LoggerContext.addLocalLogger(Unknown Source)
            at java.util.logging.LogManager$LoggerContext.addLocalLogger(Unknown Source)
            at java.util.logging.LogManager.addLogger(Unknown Source)
            at java.util.logging.LogManager$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.util.logging.LogManager.<clinit>(Unknown Source)

    The exception is coming from this line:

        private static Logger logger = Logger.getLogger(Applet.class.getName());

    Could it be a side effect of the fix http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=8017174 ? A workaround is to open the Java Control Panel and enable logging, but this is a concern since "Enable Logging" is unchecked by default. If I select "Enable Logging", the application starts fine.

    Read the article

  • UAC need for console application

    - by Daok
    I have a console application that requires administrator-level rights for some of its code. I have read that I need to add a manifest file, myprogram.exe.manifest, that looks like this:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="requireAdministrator">
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>

    But it still doesn't raise the UAC prompt (in the console or when debugging in VS). How can I solve this issue?
    Update: I am able to make it work if I run the solution as Administrator, or when I run the /bin/*.exe as Administrator. I am still wondering if it's possible to have something that will pop up when the application starts, instead of explicitly right-clicking and choosing Run as Administrator?
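    For comparison, a hedged sketch of the programmatic alternative - detecting whether the process is already elevated and, if not, relaunching itself with the shell "runas" verb so Windows shows the UAC prompt. This is not from the question; it is a common fallback when an embedded manifest is not being picked up:

        using System;
        using System.ComponentModel;
        using System.Diagnostics;
        using System.Security.Principal;

        class Program
        {
            static void Main(string[] args)
            {
                if (!IsElevated())
                {
                    // Relaunch this executable and ask the shell to elevate it.
                    var psi = new ProcessStartInfo
                    {
                        FileName = Process.GetCurrentProcess().MainModule.FileName,
                        UseShellExecute = true,   // required for the "runas" verb
                        Verb = "runas"
                    };
                    try { Process.Start(psi); }
                    catch (Win32Exception) { /* the user declined the UAC prompt */ }
                    return;
                }

                // ... code that actually needs administrator rights ...
            }

            static bool IsElevated()
            {
                var principal = new WindowsPrincipal(WindowsIdentity.GetCurrent());
                return principal.IsInRole(WindowsBuiltInRole.Administrator);
            }
        }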

    Read the article

  • Allowing Google to bypass CAPTCHA verification - sensible or not?

    - by edanfalls
    My web site has a database lookup; filling out a CAPTCHA gives you 5 minutes of lookup time. There is also some custom code to detect any automated scripts. I do this as I don't want someone data mining my site. The problem is that Google does not see the lookup results when it crawls my site. If someone is searching for a string that is present in the result of a lookup, I would like them to find this page by Googling it. The obvious solution to me is to use the PHP variable $_SERVER['HTTP_USER_AGENT'] to bypass the CAPTCHA and custom security code for the Google bots. My question is whether this is sensible or not. People could then use Google's cache to view the lookup results without having to fill out the CAPTCHA, but would Google's own script detection methods prevent them from data mining these pages? Or would there be some way for people to make $_SERVER['HTTP_USER_AGENT'] appear as Google to bypass the security measures? Thanks in advance.

    Read the article

  • Editing a .class file directly, playing around with opcodes

    - by echox
    Hi, today I tried to play around a little bit with the opcodes in a compiled Java class file. After inserting iinc 1,1 the Java virtual machine responds with:

        Exception in thread "main" java.lang.ClassFormatError: Truncated class file
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
            at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
            at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
            at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
            at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
        Could not find the main class: Test. Program will exit.

    This is my example source code:

        public class Test {
            public static void main(String[] args) {
                int i = 5;
                i++;
                i++;
                i++;
                System.out.println("Number: " + i + "\n");
            }
        }

    The opcode for an increment is 0x84 + 2 bytes for operands. There's only one section in the resulting class file which contains 0x84: [..] 8401 0184 0101 8401 01 [..]. So I would translate this as iinc 1,1 / iinc 1,1 / iinc 1,1, corresponding to my three i++; statements. I then tried to append just 840101 to increment the variable once more, but that didn't work and resulted in the ClassFormatError.
    Is there anything like a checksum for the class file? I looked up the format of a class file at http://java.sun.com/docs/books/jvms/second_edition/html/ClassFile.doc.html but could not find anything that points to some kind of bytes_of_classfile field or similar. I also don't understand why the error is "Truncated class file", because I did append something :-) I know it's not a good idea to edit class files directly, but I'm just interested in the VM internals here.

    Read the article

  • tipfy for Google App Engine: Is it stable? Can auth/session components of tipfy be used with webapp?

    - by cv12
    I am building a web application on Google App Engine that requires users to register with the application and subsequently authenticate with it and maintain sessions. I don't want to force users to have Google accounts. Also, the target audience for the application is the average non-geek, so I'm not very keen on using OpenID or OAuth. I need something simple like: User registers with an e-mail and password, and then can log back in with those credentials. I understand that this approach does not provide the security benefits of Google or OpenID authentication, but I am prepared to trade foolproof security for end-user convenience and hassle-free experience. I explored Django, but decided that consecutive deprecations from appengine-helper to app-engine-patch to django-nonrel may signal that path may be a bit risky in the long-term. I'd like to use a code base that is likely to be maintained consistently. I also explored standalone session/auth packages like gaeutilities and suas. GAEUtilities looked a bit immature (e.g., the code wasn't pythonic in places, in my opinion) and SUAS did not give me a lot of comfort with the cookie-only sessions. I could be wrong with my assessment of these two, so I would appreciate input on those (or others that may serve my objective). Finally, I recently came across tipfy. It appears to be based on Werkzeug and Alex Martelli spoke highly of it here on stackoverflow. I have two primary questions related to tipfy: As a framework, is it as mature as webapp? Is it stable and likely to be maintained for some time? Since my primary interest is the auth/session components, can those components of the tipfy framework be used with webapp, independent of the broader tipfy framework? If yes, I would appreciate a few pointers to how I could go about doing that.

    Read the article

  • SQL2008 merge replication fails to update dependent items when table is added

    - by Dan Puzey
    Setup: an existing SQL2008 merge replication scenario. A large server database, including views and stored procs, being replicated to client machines.
    What I'm doing:
    * adding a new table to the database
    * marking the new table for replication (using SP_AddMergeArticle)
    * altering a view (which is already part of the replicated content) so it includes fields from this new table (which is joined to the tables in the existing view); a stored procedure is similarly updated
    The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is also not updated.
    Non-useful workaround: if I run the snapshot agent after calling SP_AddMergeArticle and before updating the view/SP, both the view and the stored procedure changes correctly replicate to the client.
    The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail.
    Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't be replicating anyway, regardless of what's going on with the new table.

    Read the article

  • Is there an XML XQuery interface to existing XML files?

    - by xsaero00
    My company is in the education industry and we use XML to store course content. We also store some course-related information (mostly metainfo) in a relational database. Right now we are in the process of switching from our proprietary XML Schema to DocBook 5. Along with the switch we want to move course-related information from the database to XML files. The reason for this is to have all course data in one place and to put it under Subversion.
    However, we would like to keep the flexibility of the relational database and be able to easily extract specific information about a course from an XML document. XQuery seems to be up to the task, so I was researching databases that support it, but so far I could not find what I needed. What I basically want is to have my XML files in a certain directory structure, and then on top of this a system that would index my files and let me select anything out of any file using XQuery. This way I can have "my cake and eat it too": I will have an XQuery interface and still keep my files in plain text and versioned.
    Is there anything out there at least remotely resembling what I want? If you think that what I am asking for is nonsense, please make an alternative suggestion. On a related note: what XML databases (preferably native and open source) do you have experience with, and what would you recommend?

    Read the article

  • SiteMap control based on user roles doesn't work

    - by nCdy
    <siteMapNode roles="*">
      <siteMapNode url="~/Default.aspx" title=" Main" description="Main" roles="*"/>
      <siteMapNode url="~/Items.aspx" title=" Adv" description="Adv" roles="Administrator"/>
    ....

    Any user can see the Adv page. That is the trouble, and the question: why, and how do I hide site map nodes from users who are not in the required role? If I call HttpContext.Current.User.IsInRole("Administrator") it correctly tells me whether the user is in the Administrator role or not. web.config:

        <authentication mode="Forms"/>
        <membership defaultProvider="SqlProvider" userIsOnlineTimeWindow="20">
          <providers>
            <add connectionStringName="FlowWebSQL" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" passwordFormat="Hashed" applicationName="/" name="SqlProvider" type="System.Web.Security.SqlMembershipProvider"/>
          </providers>
        </membership>
        <roleManager enabled="true" defaultProvider="SqlProvider">
          <providers>
            <add connectionStringName="FlowWebSQL" name="SqlProvider" type="System.Web.Security.SqlRoleProvider" />
          </providers>
        </roleManager>

    Read the article

  • How to set connection string dynamically in NHibernate

    - by jcreddy
    Hi, I want to assign the connection string for NHibernate using the following code, and I am getting the exception shown below.

        log4net.Config.DOMConfigurator.Configure();
        Configuration config = new Configuration();
        IDictionary props = new Hashtable();
        props["hibernate.connection.provider"] = "NHibernate.Connection.DriverConnectionProvider";
        props["hibernate.dialect"] = "NHibernate.Dialect.MsSql2000Dialect";
        props["hibernate.connection.driver_class"] = "NHibernate.Driver.SqlClientDriver";
        props["hibernate.connection.connection_string"] = @"Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=Sample;Data Source=HYDHTC92318D\SQLEXPRESS";
        props["hibernate.connection.current_session_context_class"] = "web";
        props["hibernate.connection.show_sql"] = "true";
        props["hibernate.connection.proxyfactoryfactory.factory_class"] = "NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle";
        foreach (DictionaryEntry de in props)
        {
            config.SetProperty(de.Key.ToString(), de.Value.ToString());
        }
        config.AddAssembly("nhibernator");
        factory = config.BuildSessionFactory();
        session = factory.OpenSession();

    The exception is:

        The ProxyFactoryFactory was not configured. Initialize 'proxyfactory.factory_class' property of the session-factory configuration section with one of the available NHibernate.ByteCode providers.
        Example: NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu
        Example: NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle

    Please let me know the solution. Regards, JCReddy
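    Judging purely from the exception text, the property key NHibernate is asking for is 'proxyfactory.factory_class' rather than 'hibernate.connection.proxyfactoryfactory.factory_class'. A minimal hedged sketch of setting it on the same props dictionary (whether a 'hibernate.' prefix is also accepted varies between NHibernate versions, so treat the exact key as an assumption to verify):

        // Assumed key, taken from the wording of the exception message above.
        props["proxyfactory.factory_class"] =
            "NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle";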

    Read the article

  • "Downloading" a computed value form JavaScript

    - by Travis Jensen
    I'm hoping you can prove me wrong here (please, please, please! ;). I have a situation where I need to download encrypted data from a Server D (for "Data"). Server K (for "Key") has the encryption key. For security's sake, I would really prefer that Server D never know the key that Server K knows. What I want is my client (e.g. your browser) to connect to Server D for the data and Server K for the key and do the decryption locally, so the unencrypted stuff never leaves your computer.
    I can do this fine for text areas in the DOM by replacing the contents of the HTML. However, sometimes I would like to do larger files that I stream to the file system. For instance, perhaps I want to encrypt a movie, then decrypt it and stream the contents to my video player. I am not a JavaScript guru by any stretch, especially when it comes to the edge cases of things like the security sandbox. For Small D, I can handle the decryption, but I don't know how to save the decrypted file. Large D seems problematic as memory runs out. Anybody have any ideas that don't involve native plugins? Thanks!

    Read the article

  • Login for webapp, needs to be available for support staff

    - by Christian W
    I know the title is a little off, but it's hard to explain the problem in a short sentence. I am the administrator of a legacy webapp that lets users create surveys and distribute them to a group of people. We have two kinds of "users":
    1. Authorized license holders, who do all setup themselves.
    2. Clients who just want to have a survey run, but still need a user (because the webapp has "User" as the top entity in a survey environment).
    Sometimes users in #1 want us to do the setup for them (which we offer to do). This means that we have to log in as them. This is also how we do support: we log in as them and then follow them along, guiding them. Which brings me to my dilemma. Currently our security is below par, but this makes it simple for us to do support. We do want to increase our security, and one thing I have been considering is just doing the normal hashing to the DB; however, we need to be able to log in as a customer, and if they change their password without telling us, and the password is hashed in the DB, we have no way of knowing it. So I was thinking of some kind of two-way encryption for the passwords, either that or some kind of master password. Any suggestions? (The platform is classic ASP... I said it was legacy...)

    Read the article

  • Show last 4 table entries mysql php

    - by user272899
    I have a movie database Kind of like a blog and I want to display the last 4 created entries. I have a column in my table for timestamp called 'dateadded'. Using this code how would I only display the 4 most recent entries to table <?php //connect to database mysql_connect($mysql_hostname,$mysql_user,$mysql_password); @mysql_select_db($mysql_database) or die("<b>Unable to connect to specified database</b>"); //query databae $query = "SELECT * FROM movielist"; $result=mysql_query($query) or die('Error, insert query failed'); $row=0; $numrows=mysql_num_rows($result); while($row<$numrows) { $id=mysql_result($result,$row,"id"); $imgurl=mysql_result($result,$row,"imgurl"); $imdburl=mysql_result($result,$row,"imdburl"); ?> <div class="moviebox rounded"><a href="http://<?php echo $domain; ?>/viewmovie?movieid=<?php echo $id; ?>" rel="facebox"> <img src="<?php echo $imgurl; ?>" /> <form method="get" action=""> <input type="text" name="link" class="link" style="display:none" value="http://us.imdb.com/Title?<?php echo $imdburl; ?>"/> </form> </a></div> <?php $row++; } ?>

    Read the article

  • XElement vs Dictionary

    - by user135498
    Hi All, I need advice. I have an application that imports 10,000 rows containing name &amp; address from a text file into XElements that are subsequently added to a synchronized queue. When the import is complete, the app spawns worker threads that process the XElements by dequeuing them, making a database call, inserting the database output into the request document and inserting the processed document into an output queue. When all requests have been processed, the output queue is written to disk as an XML doc.
    I used XElements for the requests because I needed the flexibility to add fields to the request during processing, i.e. depending on the job type the app might be required to add a phone number, date of birth or email address to a request, based on a name/address match against a public record database.
    My question is: the XElements seem to use quite a bit of memory, and I know there is a lot of parsing as the document makes its way through the processing methods. I'm considering replacing the XElements with a Dictionary object, but I'm skeptical the gain will be worth the effort. In essence it will accomplish the same thing. Thoughts?
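    For reference, a rough sketch of what the Dictionary-shaped alternative could look like (illustrative only - the class and field names are made up, not taken from the question): keep each request as a flat name/value map while it moves through the pipeline, and build XML only once, when the output document is written.

        using System.Collections.Generic;
        using System.Xml.Linq;

        // One imported row; extra fields (phone, DOB, email) can be added freely.
        public class WorkItem
        {
            public Dictionary<string, string> Fields = new Dictionary<string, string>();

            // Convert to XML only at output time, when the result file is written.
            public XElement ToXml()
            {
                var element = new XElement("Request");
                foreach (KeyValuePair<string, string> pair in Fields)
                {
                    element.Add(new XElement(pair.Key, pair.Value));
                }
                return element;
            }
        }

    During processing, adding a field is then just item.Fields["Phone"] = phone; whether this actually saves memory over XElement would need to be measured on the real data.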

    Read the article

  • jQuery ajax delete script not actually deleting.

    - by werm
    I have a little personal webapp that I'm working on. I have a link that, when clicked, is supposed to make an ajax call to a php that is supposed to delete that info from a database. For some unknown reason, it won't actually delete the row from the database. I've tried everything I know, but still nothing. I'm sure it's something incredibly easy... Here are the scripts involved. Database output: $sql = "SELECT * FROM bookmark_app"; foreach ($dbh->query($sql) as $row) { echo '<div class="box" id="',$row['id'],'"><img src="images/avatar.jpg" width="75" height="75" border="0" class="avatar"/> <div class="text"><a href="',$row['url'],'">',$row['title'],'</a><br/> </div> /*** Click to delete ***/ <a href="?delete=',$row['id'],'" class="delete">x</a></div> <div class="clear"></div>'; } $dbh = null; Ajax script: $(document).ready(function() { $("a.delete").click(function(){ var element = $(this); var noteid = element.attr("id"); var info = 'id=' + noteid; $.ajax({ type: "GET", url: "includes/delete.php", data: info, success: function(){ element.parent().eq(0).fadeOut("slow"); } }); return false; }); }); Delete code: include('connect.php'); //delete.php?id=IdOfPost if($_GET['id']){ $id = $_GET['id']; //Delete the record of the post $delete = mysql_query("DELETE FROM `db` WHERE `id` = '$id'"); //Redirect the user header("Location:xxxx.php"); }

    Read the article

  • Zend Framework: Zend_DB Error

    - by Sergio E.
    I'm trying to learn ZF, but got strange error after 20 minutes :) Fatal error: Uncaught exception 'Zend_Db_Adapter_Exception' with message 'Configuration array must have a key for 'dbname' that names the database instance' What does this error mean? I got DB information in my config file: resources.db.adapter=pdo_mysql resources.db.host=localhost resources.db.username=name resources.db.password=pass resources.db.dbname=name Any suggestions? EDIT: This is my model file /app/models/DbTable/Bands.php: class Model_DbTable_Bands extends Zend_Db_Table_Abstract { protected $_name = 'zend_bands'; } Index controller action: public function indexAction() { $albums = new Model_DbTable_Bands(); $this->view->albums = $albums->fetchAll(); } EDIT All codes: bootstrap.php protected function _initAutoload() { $autoloader = new Zend_Application_Module_Autoloader(array( 'namespace' => '', 'basePath' => dirname(__FILE__), )); return $autoloader; } protected function _initDoctype() { $this->bootstrap('view'); $view = $this->getResource('view'); $view->doctype('XHTML1_STRICT'); } public static function setupDatabase() { $config = self::$registry->configuration; $db = Zend_Db::factory($config->db); $db->query("SET NAMES 'utf8'"); self::$registry->database = $db; Zend_Db_Table::setDefaultAdapter($db); Zend_Db_Table_Abstract::setDefaultAdapter($db); } IndexController.php class IndexController extends Zend_Controller_Action { public function init() { /* Initialize action controller here */ } public function indexAction() { $albums = new Model_DbTable_Bands(); $this->view->albums = $albums->fetchAll(); } } configs/application.ini, changed database and provided password: [development : production] phpSettings.display_startup_errors = 1 phpSettings.display_errors = 1 db.adapter = PDO_MYSQL db.params.host = localhost db.params.username = root db.params.password = pedro db.params.dbname = test models/DbTable/Bands.php class Model_DbTable_Bands extends Zend_Db_Table_Abstract { protected $_name = 'cakephp_bands'; public function getAlbum($id) { $id = (int)$id; $row = $this->fetchRow('id = ' . $id); if (!$row) { throw new Exception("Count not find row $id"); } return $row->toArray(); } }

    Read the article

  • Backdoor in OpenBSD: how is it that no developer saw it? And what about other Linux distributions? [closed]

    - by user310291
    It has been revealed that a backdoor was implanted in OpenBSD: http://www.infoworld.com/d/developer-world/software-security-honesty-the-best-policy-285
    OpenBSD is open source; how is it that nobody in the developer community could see it in the source code? So how can one trust all the other "open source" Linux systems? Of course OpenBSD is only one case; the point is not about OpenBSD, it is about open source in general. My question is not about OpenBSD per se, it's about OS source code inspection, especially of C/C++, since most systems are written in these languages.
    Also, once the source is compiled, how can one be sure that the binary really reflects the source code? If a law requires that a backdoor be implanted and obliges people to deny that kind of action under the guise of security, how can you be sure that the system has not been corrupted by some tools? As said, there is a "nondisclosure agreement". My guess is that 99.99% of developers in the world are just incapable of understanding OS source code and won't even bother to look at it. And above all, nobody wonders why the government wants such a massive backdoor, and of course they will pressure the media to deny it.

    Read the article

  • MS SQL : Can you help me with this query?

    - by rlb.usa
    I want to run a diagnostic report on our MS SQL 2008 database server. I am looping through all of the databases, and then for each database, I want to look at each table. But, when I go to look at each table (with tbl_cursor), it always picks up the tables in the database 'master'. I think it's because of my tbl_cursor selection : SELECT table_name FROM information_schema.tables WHERE table_type = 'base table' How do I fix this? Here's the entire code: SET NOCOUNT ON DECLARE @table_count INT DECLARE @db_cursor VARCHAR(100) DECLARE database_cursor CURSOR FOR SELECT name FROM sys.databases where name<>N'master' OPEN database_cursor FETCH NEXT FROM database_cursor INTO @db_cursor WHILE @@Fetch_status = 0 BEGIN PRINT @db_cursor SET @table_count = 0 DECLARE @table_cursor VARCHAR(100) DECLARE tbl_cursor CURSOR FOR SELECT table_name FROM information_schema.tables WHERE table_type = 'base table' OPEN tbl_cursor FETCH NEXT FROM tbl_cursor INTO @table_cursor WHILE @@Fetch_status = 0 BEGIN DECLARE @table_cmd NVARCHAR(255) SET @table_cmd = N'IF NOT EXISTS( SELECT TOP(1) * FROM ' + @table_cursor + ') PRINT N'' Table ''''' + @table_cursor + ''''' is empty'' ' --PRINT @table_cmd --debug EXEC sp_executesql @table_cmd SET @table_count = @table_count + 1 FETCH NEXT FROM tbl_cursor INTO @table_cursor END CLOSE tbl_cursor DEALLOCATE tbl_cursor PRINT @db_cursor + N' Total Tables : ' + CAST( @table_count as varchar(2) ) PRINT N'' -- print another blank line SET @table_count = 0 FETCH NEXT FROM database_cursor INTO @db_cursor END CLOSE database_cursor DEALLOCATE database_cursor SET NOCOUNT OFF

    Read the article

  • How to apply minor updates to Drupal 6 on shared hosting

    - by marty.fried
    I've got Drupal working on a shared host, and I uploaded some modules from my home system successfully, but I've got the message that there is a security update for my version, and I should update immediately. I'm not sure how I'm supposed to do that. It seems like the update is an entire new installation. I originally installed it using the hosting company's installer, Fantastico. Should I simply over-write the existing installation with the new files? Or ignore the message? I realize I shouldn't over-write the sites folder, or anything I've modified. The instructions that come with the download seem to be for a major version upgrade, and are way too much trouble for frequent security updates. Searching Drupal's site shows many other methods, but no indication of anything official. And some were ridiculously error-prone, and not really useful. I don't have shell access to the hosting site, although I can pay extra to get it if I really need to. Or, maybe I can clone the site on my local Linux system, do the update using a script, then upload the whole thing. Does anyone have experience with this situation?

    Read the article

  • Silverlight 3 + Java WebService

    - by Heko
    Hello! I have a Silverlight 3 project, and I need to call a Java WebService - the bindings are OK (SOAP 1.1 and basicHttpBinding). ClientConfig file:

        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="SkyinfoTestInterfaceExport2_SkyinfoTestInterfaceHttpBinding" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
                  <security mode="None">
                    <transport>
                      <extendedProtectionPolicy policyEnforcement="Never" />
                    </transport>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="myAddress" binding="basicHttpBinding" bindingConfiguration="SkyinfoTestInterfaceExport2_SkyinfoTestInterfaceHttpBinding" contract="SkyInfoServiceReference.SkyinfoTestInterface" name="SkyinfoTestInterfaceExport2_SkyinfoTestInterfaceHttpPort" />
            </client>
          </system.serviceModel>

    When I call a method on the client I get this policy error: "An error occurred while trying to make a request to URI '...'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details."
    I know about those 2 policy XML files, but the Java EE service which I'm trying to call is hosted on an IBM WebSphere Process Server to which I don't have access. Does anybody know how to work around this policy exception?

    Read the article

  • Problem with WHERE columnName = Data in MySQL query in C#

    - by Ryan Sullivan
    I have a C# webservice on a Windows Server that I am interfacing with from a Linux server with PHP. The PHP grabs information from the database, and then the page offers a "more information" button which calls the webservice and passes in the name field of the record as a parameter. So I am using a WHERE clause in my query so that I only pull the extra fields for that record. I am getting the error:

        System.Data.SqlClient.SqlException: Invalid column name '42'

    where 42 is the value from the name field of the database record. My query is:

        string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name =\"" + show + "\"";

    I do not know if it is a problem with my query or whether something is wrong with the database, but here is the rest of my code for reference. NOTE: this all works perfectly when I grab all of the records, but I only want to grab the record that I ask my webservice for.

        public class ktvService : System.Web.Services.WebService
        {
            [WebMethod]
            public string moreInfo(string show)
            {
                string connectionStr = "MyConnectionString";
                string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name =\"" + show + "\"";
                SqlConnection conn = new SqlConnection(connectionStr);
                SqlDataAdapter da = new SqlDataAdapter(selectStr, conn);
                DataSet ds = new DataSet();
                da.Fill(ds, "tableName");
                DataTable dt = ds.Tables["tableName"];
                DataRow theShow = dt.Rows[0];
                string response = "Name: " + theShow["name"].ToString() + "Cast: " + theShow["castNotes"].ToString() + " Trivia: " + theShow["triviaNotes"].ToString();
                return response;
            }
        }
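    As a point of comparison, a hedged sketch of the same lookup using a query parameter instead of string concatenation (it reuses the show and connectionStr values from the question; by default in T-SQL, double quotes delimit identifiers rather than string literals, which is consistent with the "Invalid column name" wording of the error):

        // Sketch of the moreInfo body; assumes "using System.Data.SqlClient;" as in the original service.
        string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name = @show";
        using (SqlConnection conn = new SqlConnection(connectionStr))
        using (SqlCommand cmd = new SqlCommand(selectStr, conn))
        {
            cmd.Parameters.AddWithValue("@show", show);  // value is passed as data, not SQL text
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                if (reader.Read())
                {
                    return "Name: " + reader["name"] +
                           " Cast: " + reader["castNotes"] +
                           " Trivia: " + reader["triviaNotes"];
                }
            }
        }
        return "Not found";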

    Read the article

  • Using ember-resource with couchdb - how can i save my documents?

    - by Thomas Herrmann
    I am implementing an application using ember.js and couchdb. I choose ember-resource as database access layer because it nicely supports nested JSON documents. Since couchdb uses the attribute _rev for optimistic locking in every document, this attribute has to be updated in my application after saving the data to the couchdb. My idea to implement this is to reload the data right after saving to the database and get the new _rev back with the rest of the document. Here is my code for this: // Since we use CouchDB, we have to make sure that we invalidate and re-fetch // every document right after saving it. CouchDB uses an optimistic locking // scheme based on the attribute "_rev" in the documents, so we reload it in // order to have the correct _rev value. didSave: function() { this._super.apply(this, arguments); this.forceReload(); }, // reload resource after save is done, expire to make reload really do something forceReload: function() { this.expire(); // Everything OK up to this location Ember.run.next(this, function() { this.fetch() // Sub-Document is reset here, and *not* refetched! .fail(function(error) { App.displayError(error); }) .done(function() { App.log("App.Resource.forceReload fetch done, got revision " + self.get('_rev')); }); }); } This works for most cases, but if i have a nested model, the sub-model is replaced with the old version of the data just before the fetch is executed! Interestingly enough, the correct (updated) data is stored in the database and the wrong (old) data is in the memory model after the fetch, although the _rev attribut is correct (as well as all attributes of the main object). Here is a part of my object definition: App.TaskDefinition = App.Resource.define({ url: App.dbPrefix + 'courseware', schema: { id: String, _rev: String, type: String, name: String, comment: String, task: { type: 'App.Task', nested: true } } }); App.Task = App.Resource.define({ schema: { id: String, title: String, description: String, startImmediate: Boolean, holdOnComment: Boolean, ..... // other attributes and sub-objects } }); Any ideas where the problem might be? Thank's a lot for any suggestion! Kind regards, Thomas

    Read the article

  • SQL: Join multiple tables and get a grouped sum

    - by Scienceprodigy
    I have a database with 3 tables that have related data. One table has transactions, and the other two relate to transaction categories. Basically it's financial data, so each transaction has a category (i.e. "gasoline" for a gas purchase transaction). A short version of my Transactions table looks like this -
    Transactions Table:
        | ID | Type | Amount | Category |
    I also have two more tables relating a category to a category's parent. So basically, every Category entry in the Transactions Table belongs to a parent category (i.e. "gasoline" would belong to, say, "Automotive Expenses"). For categories and their parents, I have two tables -
    Category Children:
        | ID | Parent Category ID | Child Category |
    Category Parent:
        | ID | Parent Category |
    What I'm trying to do is query the database and have it return total spending by parent category. To count as "spending", the Type of a transaction must be "Debit". I tried the following statement:

        SELECT category_parents.parent_category, SUM(amount) AS totals
        FROM (transactions
              INNER JOIN category_children ON transactions.category = 'category_children.child_category')
              INNER JOIN category_parents ON category_children.parent_category_id = category_parents._id
        WHERE trans_type = 'Debit'
        GROUP BY parent_category
        ORDER BY totals DESC

    but it gives me the following exception:

        12-31 13:51:21.515: ERROR/Exception on query(4403): android.database.sqlite.SQLiteException: no such column: category_children.parent_category_id: , while compiling: SELECT category_parents.parent_category, SUM(amount) AS totals FROM (transactions INNER JOIN category_children ON transactions.category='category_children.child_category') INNER JOIN category_parents ON category_children.parent_category_id=category_parents._id where trans_type='Debit' group by parent_category order by totals desc

    Any help is appreciated. (EXTRA CREDIT: I also need to make another statement to get spending by child category, given the parent category.)

    Read the article

  • .NET 4.0 Implementing OutputCacheProvider

    - by azamsharp
    I am checking out the OutputCacheProvider in ASP.NET 4.0 and using it to store my output cache in a MongoDb database. I am not able to understand the purpose of the Add method, which is one of the methods you override on OutputCacheProvider. The Add method is invoked when you have VaryByParam set to something. So, if I have VaryByParam = "id", then the Add method will be invoked. But after the Add, Set is also invoked, and I can insert into the MongoDb database inside the Set method.

        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            // if there is something in the query string, use the path and query to generate the key
            var url = HttpContext.Current.Request.Url;
            if (!String.IsNullOrEmpty(url.Query))
            {
                key = url.PathAndQuery;
            }

            Debug.WriteLine("Set(" + key + "," + entry + "," + utcExpiry + ")");
            _service.Set(new CacheItem() { Key = MD5(key), Item = entry, Expires = utcExpiry });
        }

    Inside the Set method I use PathAndQuery to get the params of the query string, then take an MD5 of the key and save it into the MongoDb database. It seems like the Add method would be useful if I were doing something like VaryByParam = "custom". Can anyone shed some light on the Add method of OutputCacheProvider?
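    For context, a hedged sketch of what an Add override typically looks like for this kind of provider: Add is meant to behave as an add-if-absent operation, so if an entry already exists for the key it should be returned unchanged, otherwise the new entry is stored and returned. The _service.Get helper and the CacheItem shape below are assumptions modelled on the Set method above, not code from the question:

        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            string hashedKey = MD5(key);

            // Assumed lookup helper; the question only shows _service.Set.
            CacheItem existing = _service.Get(hashedKey);
            if (existing != null && existing.Expires > DateTime.UtcNow)
            {
                // Something is already cached under this key: return it and do not overwrite.
                return existing.Item;
            }

            _service.Set(new CacheItem() { Key = hashedKey, Item = entry, Expires = utcExpiry });
            return entry;
        }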

    Read the article
