Search Results

Search found 24766 results for 'information retrieval'.

  • Developer Training – 6 Online Courses to Learn SQL Server, MySQL and Technology

    - by Pinal Dave
    Video courses are the next big thing, and I am happy to have authored six different video courses with Pluralsight so far. Here is the list. Note: if you click on a course and it does not open, you need to log in to Pluralsight with a valid username and password, or sign up for a FREE trial. Please leave a comment naming your favorite course; ten random winners will get a surprise gift via email. Bonus points if you also mention your favorite module from the course.

    SQL Server Performance: Introduction to Query Tuning
    SQL Server performance tuning is an in-depth topic, and an art to master. A key component of overall application performance tuning is query tuning. Writing queries in an efficient manner, and making sure they execute in the most optimal way possible, is always a challenge. The basics revolve around the details of how SQL Server carries out query execution, so the optimizations explored in this course follow along the same lines. Click to View Course

    SQL Server Performance: Indexing Basics
    Indexes are the most crucial objects in a database, and the first stop for any DBA or developer when it comes to performance tuning. Indexes have a good side as well as an evil side. To master the art of performance tuning, one has to understand the fundamentals of indexes and the best practices associated with them. This course is for every DBA and developer who deals with performance tuning and wants to use indexes to improve the performance of the server. Click to View Course

    SQL Server Questions and Answers
    This course is designed to help you better understand how to use SQL Server effectively. It presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. This course is for anyone working with SQL Server databases who wants to improve her knowledge and understanding of this complex platform. Click to View Course

    MySQL Fundamentals
    MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack. This course covers the fundamentals of MySQL, including how to install it and how to write basic data retrieval and data modification queries. Click to View Course

    Building a Successful Blog
    Expressing yourself is one of the most basic human behaviors, and blogging has made it easy. Just as a letter or a book has a structure and formula, blogging does too. This introductory course goes over the basics of blogging and shows how to get started immediately. If you already have a blog, the course is even more relevant, as it discusses many of the common questions and issues you will face in your blogging routine. Click to View Course

    Introduction to ColdFusion
    ColdFusion is a rapid web application development platform. In this course you will learn the basics of the ColdFusion platform and how to rapidly develop web sites with it. The course begins with the basics of the ColdFusion Markup Language and moves on to common development practices, then to frequent database operations and the more advanced concepts of forms, sessions and cookies. The last module sums up all the concepts covered in the course with a sample application. Click to View Course

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Fast Data - Big Data's Achilles' heel

    - by thegreeneman
    At Oracle OpenWorld 2013, in Mark Hurd and Thomas Kurian's keynote, they presented Oracle's Fast Data software solution stack and discussed a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the pieces of the Fast Data software stack work together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement.

    Oracle Event Processing enables continuous queries that filter the Big Data fire hose, chains intelligent events into real-time service invocations, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time, so SQL statements can look for triggers on events like a breach of a weighted moving average in a real-time data stream.

    Oracle Coherence is a distributed grid caching solution used to provide very low latency access to cached data. When the data is too big to fit into a single process, it is spread around a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing.

    Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads. For example, large sensor networks generate data that needs to be captured while analysts simultaneously extract it using range-based queries for upstream analytics. Another example is storing cookies from user web sessions for ultra-low-latency user profile management, while also leveraging that data in holistic MapReduce operations on your Hadoop cluster for segmented site analysis. NoSQL thus plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency, scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture, and a NoSQL cluster can be deployed easily to provide essential services in industry-specific Fast Data solutions; these technologies have been demonstrated working together in a location-based personalization service.

    The question then becomes how these things work together to deliver an end-to-end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, when it comes to Big Data you often need to use them together. You may have the need for the memory latencies of the Coherence cache, but have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream-processing side, it is similar to the Coherence case: as your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to keep the real-time loop performing while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.
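    To make the key-value ingest path described above concrete, here is a minimal, untested sketch against the oracle.kv Java client of that era. The store name, host, and port are invented, and the API names are recalled from memory, so treat it as illustrative rather than authoritative:

        import oracle.kv.*;
        import java.nio.charset.StandardCharsets;

        public class SensorIngest {
            public static void main(String[] args) {
                // Connect to a hypothetical local store; name and port are placeholders.
                KVStore store = KVStoreFactory.getStore(
                        new KVStoreConfig("kvstore", "localhost:5000"));

                // Ingest a simple key-value pair, e.g. the latest reading from a sensor.
                Key key = Key.createKey("sensor-42");
                store.put(key, Value.createValue("23.7C".getBytes(StandardCharsets.UTF_8)));

                // Low-latency read back, as an analyst-side range query would do at scale.
                ValueVersion vv = store.get(key);
                System.out.println(new String(vv.getValue().getValue(), StandardCharsets.UTF_8));
                store.close();
            }
        }

    The same put/get pair is what a Coherence cache-overflow handler or an OEP spill path would ultimately drive.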

    Read the article

  • Error when Eclipse started, and now my Package Explorer is empty!

    - by carpenteri
    Friends, just a quick introduction: I'm currently learning Java, using a combination of the Head First Java book and Eclipse. Everything was going well until tonight! When I started Eclipse tonight, I saw an error message which I didn't pay attention to (I know! I know!) and acknowledged, after which the Package Explorer was empty where it used to contain my Head First project! After a quick "google" I found the workspace's .metadata log, and the errors are shown below. The version of Eclipse I am using is 20100218-1602, and the only plugin I use is EGit. Any help would be much appreciated. Thanks!

    !SESSION 2010-06-08 19:24:33.841 -----------------------------------------------
    eclipse.buildId=unknown
    java.version=1.5.0_22
    java.vendor=Sun Microsystems Inc.
    BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=en_GB
    Framework arguments: -product org.eclipse.epp.package.java.product
    Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.java.product

    !ENTRY org.eclipse.ui.workbench 4 2 2010-06-08 19:24:36.475
    !MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.ui.workbench".
    !STACK 1
    org.eclipse.ui.WorkbenchException: Content is not allowed in prolog.
        at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:121)
        at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64)
        at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895)
        at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
        at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890)
        at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183)
        at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781)
    Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
        at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94)
        ... 6 more

    !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475
    !MESSAGE Content is not allowed in prolog.
    !STACK 0
    org.xml.sax.SAXParseException: Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
        at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94)
        at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64)
        at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895)
        at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
        at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890)
        at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183)
        at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781)

    !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475
    !MESSAGE Content is not allowed in prolog.
    !STACK 0
    org.xml.sax.SAXParseException: Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
        at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94)
        at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64)
        at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895)
        at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
        at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890)
        at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183)
        at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781)

    !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:41.442
    !MESSAGE Internal Error
    !STACK 1
    org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'OpenTypeHistory.xml'
        at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:257)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:166)
        at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199)
        at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185)
        at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381)
        at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
    Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:255)
        ... 6 more

    !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:41.442
    !MESSAGE Problems reading information from XML 'OpenTypeHistory.xml'
    !STACK 0
    org.xml.sax.SAXParseException: Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:255)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:166)
        at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199)
        at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185)
        at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381)
        at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:50.435
    !MESSAGE Internal Error
    !STACK 1
    org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'QualifiedTypeNameHistory.xml'
        at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:257)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:166)
        at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33)
        at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26)
        at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836)
        at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474)
        at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546)
        at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098)
        at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593)
        at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261)
        at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216)
        at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266)
        at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685)
        at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:592)
        at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559)
        at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514)
        at org.eclipse.equinox.launcher.Main.run(Main.java:1311)
    Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:255)
        ... 25 more

    !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:50.435
    !MESSAGE Problems reading information from XML 'QualifiedTypeNameHistory.xml'
    !STACK 0
    org.xml.sax.SAXParseException: Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:255)
        at org.eclipse.jdt.internal.corext.util.History.load(History.java:166)
        at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33)
        at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26)
        at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836)
        at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474)
        at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546)
        at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098)
        at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593)
        at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261)
        at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216)
        at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266)
        at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685)
        at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:592)
        at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559)
        at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514)
        at org.eclipse.equinox.launcher.Main.run(Main.java:1311)
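    For context, "Content is not allowed in prolog" is the standard Xerces complaint when bytes precede the XML declaration, which here suggests workspace metadata files (such as the workbench state XML or the OpenTypeHistory.xml named above) were truncated or padded with junk by a crash; the commonly reported remedy is to delete the corrupted file under the workspace's .metadata folder and let Eclipse regenerate it. A tiny sketch that reproduces the parser error:

        import javax.xml.parsers.DocumentBuilderFactory;
        import java.io.ByteArrayInputStream;
        import java.nio.charset.StandardCharsets;

        public class PrologDemo {
            public static void main(String[] args) {
                // Any bytes before the XML declaration (here, NUL padding such as
                // a crash can leave behind) make the parser reject the document.
                byte[] corrupted = "\0\0<?xml version=\"1.0\"?><memento/>"
                        .getBytes(StandardCharsets.UTF_8);
                try {
                    DocumentBuilderFactory.newInstance().newDocumentBuilder()
                            .parse(new ByteArrayInputStream(corrupted));
                } catch (Exception e) {
                    System.out.println(e); // SAXParseException: Content is not allowed in prolog.
                }
            }
        }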

    Read the article

  • Snort problems generating alerts from the DARPA 1998 intrusion detection dataset

    - by manofseven2
    Hi. I'm working on the DARPA 1998 intrusion detection dataset. When I run Snort on this dataset (the outside.tcpdump file), Snort doesn't generate the complete list of alerts: it starts from the last few hours of the tcpdump file and generates alerts only for that section, while all the packets from the first hours are ignored. Another problem is the timestamps of the generated alerts: when I run Snort on a specific day of the dataset, it inserts an incorrect timestamp for each alert. The configuration, command-line statement, and other information about my research are:

    Snort version: 2.8.6
    Operating system: Windows XP
    Rule version: snortrules-snapshot-2860_s.tar.gz

    Command line:

    snort_2.8.6 -c D:\programs\Snort_2.8.6\snort\etc\snort.conf -r d:\users\amir\docs\darpa\training_data\week_3\monday\outside.tcpdump -l D:\users\amir\current-task\research\thesis\snort\890230

    snort.conf:

    # Setup the network addresses you are protecting
    var HOME_NET any
    # Set up the external network addresses. Leave as "any" in most situations
    var EXTERNAL_NET any
    # List of DNS servers on your network
    var DNS_SERVERS $HOME_NET
    # List of SMTP servers on your network
    var SMTP_SERVERS $HOME_NET
    # List of web servers on your network
    var HTTP_SERVERS $HOME_NET
    # List of sql servers on your network
    var SQL_SERVERS $HOME_NET
    # List of telnet servers on your network
    var TELNET_SERVERS $HOME_NET
    # List of ssh servers on your network
    var SSH_SERVERS $HOME_NET
    # List of ports you run web servers on
    portvar HTTP_PORTS [80,1220,2301,3128,7777,7779,8000,8008,8028,8080,8180,8888,9999]
    # List of ports you want to look for SHELLCODE on.
    portvar SHELLCODE_PORTS !80
    # List of ports you might see oracle attacks on
    portvar ORACLE_PORTS 1024:
    # List of ports you want to look for SSH connections on:
    portvar SSH_PORTS 22
    # other variables, these should not be modified
    var AIM_SERVERS [64.12.24.0/23,64.12.28.0/23,64.12.161.0/24,64.12.163.0/24,64.12.200.0/24,205.188.3.0/24,205.188.5.0/24,205.188.7.0/24,205.188.9.0/24,205.188.153.0/24,205.188.179.0/24,205.188.248.0/24]
    var RULE_PATH ../rules
    var SO_RULE_PATH ../so_rules
    var PREPROC_RULE_PATH ../preproc_rules
    # Stop generic decode events:
    config disable_decode_alerts
    # Stop Alerts on experimental TCP options
    config disable_tcpopt_experimental_alerts
    # Stop Alerts on obsolete TCP options
    config disable_tcpopt_obsolete_alerts
    # Stop Alerts on T/TCP alerts
    config disable_tcpopt_ttcp_alerts
    # Stop Alerts on all other TCPOption type events:
    config disable_tcpopt_alerts
    # Stop Alerts on invalid ip options
    config disable_ipopt_alerts
    # Alert if value in length field (IP, TCP, UDP) is greater than the length of the packet
    # config enable_decode_oversized_alerts
    # Same as above, but drop packet if in Inline mode (requires enable_decode_oversized_alerts)
    # config enable_decode_oversized_drops
    # Configure IP / TCP checksum mode
    config checksum_mode: all
    config pcre_match_limit: 1500
    config pcre_match_limit_recursion: 1500
    # Configure the detection engine. See the Snort Manual, Configuring Snort - Includes - Config
    config detection: search-method ac-split search-optimize max-pattern-len 20
    # Configure the event queue. For more information, see README.event_queue
    config event_queue: max_queue 8 log 3 order_events content_length
    dynamicpreprocessor directory D:\programs\Snort_2.8.6\snort\lib\snort_dynamicpreprocessor
    dynamicengine D:\programs\Snort_2.8.6\snort\lib\snort_dynamicengine\sf_engine.dll
    # path to dynamic rules libraries
    #dynamicdetection directory /usr/local/lib/snort_dynamicrules
    preprocessor frag3_global: max_frags 65536
    preprocessor frag3_engine: policy windows detect_anomalies overlap_limit 10 min_fragment_length 100 timeout 180
    preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp yes, track_icmp no
    preprocessor stream5_tcp: policy windows, detect_anomalies, require_3whs 180, \
        overlap_limit 10, small_segments 3 bytes 150, timeout 180, \
        ports client 21 22 23 25 42 53 79 109 110 111 113 119 135 136 137 139 143 \
        161 445 513 514 587 593 691 1433 1521 2100 3306 6665 6666 6667 6668 6669 \
        7000 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779, \
        ports both 80 443 465 563 636 989 992 993 994 995 1220 2301 3128 6907 7702 7777 7779 7801 7900 7901 7902 7903 7904 7905 \
        7906 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 8000 8008 8028 8080 8180 8888 9999
    preprocessor stream5_udp: timeout 180
    preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 20480 decompress_depth 20480
    preprocessor http_inspect_server: server default \
        chunk_length 500000 \
        server_flow_depth 0 \
        client_flow_depth 0 \
        post_depth 65495 \
        oversize_dir_length 500 \
        max_header_length 750 \
        max_headers 100 \
        ports { 80 1220 2301 3128 7777 7779 8000 8008 8028 8080 8180 8888 9999 } \
        non_rfc_char { 0x00 0x01 0x02 0x03 0x04 0x05 0x06 0x07 } \
        enable_cookie \
        extended_response_inspection \
        inspect_gzip \
        apache_whitespace no \
        ascii no \
        bare_byte no \
        directory no \
        double_decode no \
        iis_backslash no \
        iis_delimiter no \
        iis_unicode no \
        multi_slash no \
        non_strict \
        u_encode yes \
        webroot no
    preprocessor rpc_decode: 111 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779 no_alert_multiple_requests no_alert_large_fragments no_alert_incomplete
    preprocessor bo
    preprocessor ftp_telnet: global inspection_type stateful encrypted_traffic no
    preprocessor ftp_telnet_protocol: telnet \
        ayt_attack_thresh 20 \
        normalize ports { 23 } \
        detect_anomalies
    preprocessor ftp_telnet_protocol: ftp server default \
        def_max_param_len 100 \
        ports { 21 2100 3535 } \
        telnet_cmds yes \
        ignore_telnet_erase_cmds yes \
        ftp_cmds { ABOR ACCT ADAT ALLO APPE AUTH CCC CDUP } \
        ftp_cmds { CEL CLNT CMD CONF CWD DELE ENC EPRT } \
        ftp_cmds { EPSV ESTA ESTP FEAT HELP LANG LIST LPRT } \
        ftp_cmds { LPSV MACB MAIL MDTM MIC MKD MLSD MLST } \
        ftp_cmds { MODE NLST NOOP OPTS PASS PASV PBSZ PORT } \
        ftp_cmds { PROT PWD QUIT REIN REST RETR RMD RNFR } \
        ftp_cmds { RNTO SDUP SITE SIZE SMNT STAT STOR STOU } \
        ftp_cmds { STRU SYST TEST TYPE USER XCUP XCRC XCWD } \
        ftp_cmds { XMAS XMD5 XMKD XPWD XRCP XRMD XRSQ XSEM } \
        ftp_cmds { XSEN XSHA1 XSHA256 } \
        alt_max_param_len 0 { ABOR CCC CDUP ESTA FEAT LPSV NOOP PASV PWD QUIT REIN STOU SYST XCUP XPWD } \
        alt_max_param_len 200 { ALLO APPE CMD HELP NLST RETR RNFR STOR STOU XMKD } \
        alt_max_param_len 256 { CWD RNTO } \
        alt_max_param_len 400 { PORT } \
        alt_max_param_len 512 { SIZE } \
        chk_str_fmt { ACCT ADAT ALLO APPE AUTH CEL CLNT CMD } \
        chk_str_fmt { CONF CWD DELE ENC EPRT EPSV ESTP HELP } \
        chk_str_fmt { LANG LIST LPRT MACB MAIL MDTM MIC MKD } \
        chk_str_fmt { MLSD MLST MODE NLST OPTS PASS PBSZ PORT } \
        chk_str_fmt { PROT REST RETR RMD RNFR RNTO SDUP SITE } \
        chk_str_fmt { SIZE SMNT STAT STOR STRU TEST TYPE USER } \
        chk_str_fmt { XCRC XCWD XMAS XMD5 XMKD XRCP XRMD XRSQ } \
        chk_str_fmt { XSEM XSEN XSHA1 XSHA256 } \
        cmd_validity ALLO \
        cmd_validity EPSV \
        cmd_validity MACB \
        cmd_validity MDTM \
        cmd_validity MODE \
        cmd_validity PORT \
        cmd_validity PROT \
        cmd_validity STRU \
        cmd_validity TYPE
    preprocessor ftp_telnet_protocol: ftp client default \
        max_resp_len 256 \
        bounce yes \
        ignore_telnet_erase_cmds yes \
        telnet_cmds yes
    preprocessor smtp: ports { 25 465 587 691 } \
        inspection_type stateful \
        normalize cmds \
        normalize_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
        max_command_line_len 512 \
        max_header_line_len 1000 \
        max_response_line_len 512 \
        alt_max_command_line_len 260 { MAIL } \
        alt_max_command_line_len 300 { RCPT } \
        alt_max_command_line_len 500 { HELP HELO ETRN EHLO } \
        alt_max_command_line_len 255 { EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET } \
        alt_max_command_line_len 246 { SEND SAML SOML AUTH TURN ETRN DATA RSET QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
        valid_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
        xlink2state { enabled }
    preprocessor ssh: server_ports { 22 } \
        autodetect \
        max_client_bytes 19600 \
        max_encrypted_packets 20 \
        max_server_version_len 100 \
        enable_respoverflow enable_ssh1crc32 \
        enable_srvoverflow enable_protomismatch
    preprocessor dcerpc2: memcap 102400, events [co ]
    preprocessor dcerpc2_server: default, policy WinXP, \
        detect [smb [139,445], tcp 135, udp 135, rpc-over-http-server 593], \
        autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \
        smb_max_chain 3
    preprocessor dns: ports { 53 } enable_rdata_overflow
    preprocessor ssl: ports { 443 465 563 636 989 992 993 994 995 7801 7702 7900 7901 7902 7903 7904 7905 7906 6907 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 }, trustservers, noinspect_encrypted
    # SDF sensitive data preprocessor. For more information see README.sensitive_data
    preprocessor sensitive_data: alert_threshold 25
    output alert_full: alert.log
    output database: log, mysql, user=root password=123456 dbname=snort host=localhost
    include classification.config
    include reference.config
    include $RULE_PATH/local.rules
    include $RULE_PATH/attack-responses.rules
    include $RULE_PATH/backdoor.rules
    include $RULE_PATH/bad-traffic.rules
    include $RULE_PATH/chat.rules
    include $RULE_PATH/content-replace.rules
    include $RULE_PATH/ddos.rules
    include $RULE_PATH/dns.rules
    include $RULE_PATH/dos.rules
    include $RULE_PATH/exploit.rules
    include $RULE_PATH/finger.rules
    include $RULE_PATH/ftp.rules
    include $RULE_PATH/icmp.rules
    include $RULE_PATH/icmp-info.rules
    include $RULE_PATH/imap.rules
    include $RULE_PATH/info.rules
    include $RULE_PATH/misc.rules
    include $RULE_PATH/multimedia.rules
    include $RULE_PATH/mysql.rules
    include $RULE_PATH/netbios.rules
    include $RULE_PATH/nntp.rules
    include $RULE_PATH/oracle.rules
    include $RULE_PATH/other-ids.rules
    include $RULE_PATH/p2p.rules
    include $RULE_PATH/policy.rules
    include $RULE_PATH/pop2.rules
    include $RULE_PATH/pop3.rules
    include $RULE_PATH/rpc.rules
    include $RULE_PATH/rservices.rules
    include $RULE_PATH/scada.rules
    include $RULE_PATH/scan.rules
    include $RULE_PATH/shellcode.rules
    include $RULE_PATH/smtp.rules
    include $RULE_PATH/snmp.rules
    include $RULE_PATH/specific-threats.rules
    include $RULE_PATH/spyware-put.rules
    include $RULE_PATH/sql.rules
    include $RULE_PATH/telnet.rules
    include $RULE_PATH/tftp.rules
    include $RULE_PATH/virus.rules
    include $RULE_PATH/voip.rules
    include $RULE_PATH/web-activex.rules
    include $RULE_PATH/web-attacks.rules
    include $RULE_PATH/web-cgi.rules
    include $RULE_PATH/web-client.rules
    include $RULE_PATH/web-coldfusion.rules
    include $RULE_PATH/web-frontpage.rules
    include $RULE_PATH/web-iis.rules
    include $RULE_PATH/web-misc.rules
    include $RULE_PATH/web-php.rules
    include $RULE_PATH/x11.rules
    include threshold.conf

    Can anyone help me to solve this problem? Thanks.

    Read the article

  • Best way of storing an "array of records" at design-time

    - by smartins
    I have a set of data that I need to store at design time to construct the contents of a group of components at run time. Something like this:

    type
      TVulnerabilityData = record
        Vulnerability: TVulnerability;
        Name: string;
        Description: string;
        ErrorMessage: string;
      end;

    What's the best way of storing this data at design time for later retrieval at run time? I'll have about 20 records for which I know all the contents of each "record", but I'm stuck on the best way of storing the data. The only semi-elegant idea I've come up with is to "construct" each record in the unit's initialization section, like this:

    var
      VulnerabilityData: array[Low(TVulnerability)..High(TVulnerability)] of TVulnerabilityData;

    ....

    initialization
      VulnerabilityData[0].Vulnerability := vVulnerability1;
      VulnerabilityData[0].Name := 'Name of Vulnerability1';
      VulnerabilityData[0].Description := 'Description of Vulnerability1';
      VulnerabilityData[0].ErrorMessage := 'Error Message of Vulnerability1';
      VulnerabilityData[1]......
      .....
      VulnerabilityData[20]......

    Is there a better and/or more elegant solution than this? Thanks for reading.
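    For comparison only: in languages without Delphi's initialization sections, the same design-time table is often expressed as an enumerated type that carries its own fields. A hypothetical Java rendering of the record above (all names invented):

        // Each enum constant is one "record" whose fields are fixed at compile time.
        public enum Vulnerability {
            VULNERABILITY_1("Name of Vulnerability1",
                            "Description of Vulnerability1",
                            "Error Message of Vulnerability1"),
            VULNERABILITY_2("Name of Vulnerability2",
                            "Description of Vulnerability2",
                            "Error Message of Vulnerability2");

            public final String displayName;
            public final String description;
            public final String errorMessage;

            Vulnerability(String displayName, String description, String errorMessage) {
                this.displayName = displayName;
                this.description = description;
                this.errorMessage = errorMessage;
            }
        }

    Lookup is then just Vulnerability.VULNERABILITY_1.description, with no initialization code to maintain.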

    Read the article

  • Magento - edit form in custom module grid

    - by Shani1351
    I have a custom module and a working grid to manage the module items in the admin. My module file structure is: app\code\local\G4R\GroupSales\Block\Adminhtml\Groupsale\. I want to add an edit form so I can view and edit each item in the grid. I followed this tutorial: http://www.magentocommerce.com/wiki/5_-_modules_and_development/0_-_module_development_in_magento/custom_module_with_custom_database_table#part_2_-_backend_administration but when the edit page loads, instead of the tab content I get an error:

    Fatal error: Call to a member function setData() on a non-object in C:\xampp\htdocs\mystore\app\code\core\Mage\Adminhtml\Block\Widget\Form\Container.php on line 129

    This is my code:

    /app/code/local/G4R/GroupSales/Block/Adminhtml/Groupsale/Edit.php

    <?php
    class G4R_GroupSales_Block_Adminhtml_Groupsale_Edit extends Mage_Adminhtml_Block_Widget_Form_Container {
        public function __construct() {
            parent::__construct();
            $this->_objectId = 'id';
            $this->_blockGroup = 'groupsale';
            $this->_controller = 'adminhtml_groupsales';
            $this->_updateButton('save', 'label', Mage::helper('groupsales')->__('Save Item'));
            $this->_updateButton('delete', 'label', Mage::helper('groupsales')->__('Delete Item'));
        }

        public function getHeaderText() {
            if (Mage::registry('groupsale_data') && Mage::registry('groupsale_data')->getId()) {
                return Mage::helper('groupsales')->__("Edit Item '%s'", $this->htmlEscape(Mage::registry('groupsale_data')->getTitle()));
            } else {
                return Mage::helper('groupsales')->__('Add Item');
            }
        }
    }

    /app/code/local/G4R/GroupSales/Block/Adminhtml/Groupsale/Edit/Form.php

    <?php
    class G4R_GroupSales_Block_Adminhtml_Groupsale_Edit_Form extends Mage_Adminhtml_Block_Widget_Form {
        protected function _prepareForm() {
            $form = new Varien_Data_Form(array(
                'id'     => 'edit_form',
                'action' => $this->getUrl('*/*/save', array('id' => $this->getRequest()->getParam('id'))),
                'method' => 'post',
            ));
            $form->setUseContainer(true);
            $this->setForm($form);
            return parent::_prepareForm();
        }
    }

    /app/code/local/G4R/GroupSales/Block/Adminhtml/Groupsale/Edit/Tabs.php

    <?php
    class G4R_GroupSales_Block_Adminhtml_Groupsale_Edit_Tabs extends Mage_Adminhtml_Block_Widget_Tabs {
        public function __construct() {
            parent::__construct();
            $this->setId('groupsales_groupsale_tabs');
            $this->setDestElementId('edit_form');
            $this->setTitle(Mage::helper('groupsales')->__('Groupsale Information'));
        }

        protected function _beforeToHtml() {
            $this->addTab('form_section', array(
                'label'   => Mage::helper('groupsales')->__('Item Information 1'),
                'title'   => Mage::helper('groupsales')->__('Item Information 2'),
                'content' => $this->getLayout()->createBlock('groupsales/adminhtml_groupsale_edit_tab_form')->toHtml(),
            ));
            return parent::_beforeToHtml();
        }
    }

    /app/code/local/G4R/GroupSales/Block/Adminhtml/Groupsale/Edit/Tab/Form.php

    <?php
    class G4R_GroupSales_Block_Adminhtml_Groupsale_Edit_Tab_Form extends Mage_Adminhtml_Block_Widget_Form {
        protected function _prepareForm() {
            $form = new Varien_Data_Form();
            $this->setForm($form);
            $fieldset = $form->addFieldset('groupsales_form', array('legend' => Mage::helper('groupsales')->__('Item information 3')));

            // $fieldset->addField('title', 'text', array(
            //     'label'    => Mage::helper('groupsales')->__('Title'),
            //     'class'    => 'required-entry',
            //     'required' => true,
            //     'name'     => 'title',
            // ));

            if (Mage::getSingleton('adminhtml/session')->getGroupsaleData()) {
                $form->setValues(Mage::getSingleton('adminhtml/session')->getGroupsaleData());
                Mage::getSingleton('adminhtml/session')->setGroupsaleData(null);
            } elseif (Mage::registry('groupsale_data')) {
                $form->setValues(Mage::registry('groupsale_data')->getData());
            }
            return parent::_prepareForm();
        }
    }

    /app/code/local/G4R/GroupSales/controllers/Adminhtml/GroupsaleController.php

    <?php
    class G4R_GroupSales_Adminhtml_GroupsaleController extends Mage_Adminhtml_Controller_Action {
        protected function _initAction() {
            $this->loadLayout()
                ->_setActiveMenu('groupsale/items')
                ->_addBreadcrumb(Mage::helper('adminhtml')->__('Items Manager'), Mage::helper('adminhtml')->__('Item Manager'));
            return $this;
        }

        public function indexAction() {
            $this->_initAction();
            $this->_addContent($this->getLayout()->createBlock('groupsales/adminhtml_groupsale'));
            $this->renderLayout();
        }

        public function editAction() {
            $groupsaleId = $this->getRequest()->getParam('id');
            $groupsaleModel = Mage::getModel('groupsales/groupsale')->load($groupsaleId);
            if ($groupsaleModel->getId() || $groupsaleId == 0) {
                Mage::register('groupsale_data', $groupsaleModel);
                $this->loadLayout();
                $this->_setActiveMenu('groupsale/items');
                $this->_addBreadcrumb(Mage::helper('adminhtml')->__('Item Manager'), Mage::helper('adminhtml')->__('Item Manager'));
                $this->_addBreadcrumb(Mage::helper('adminhtml')->__('Item News'), Mage::helper('adminhtml')->__('Item News'));
                $this->getLayout()->getBlock('head')->setCanLoadExtJs(true);
                $this->_addContent($this->getLayout()->createBlock('groupsales/adminhtml_groupsale_edit'))
                    ->_addLeft($this->getLayout()->createBlock('groupsales/adminhtml_groupsale_edit_tabs'));
                $this->renderLayout();
            } else {
                Mage::getSingleton('adminhtml/session')->addError(Mage::helper('groupsales')->__('Item does not exist'));
                $this->_redirect('*/*/');
            }
        }

        public function newAction() {
            $this->_forward('edit');
        }

        public function saveAction() {
            if ($this->getRequest()->getPost()) {
                try {
                    $postData = $this->getRequest()->getPost();
                    $groupsaleModel = Mage::getModel('groupsales/groupsale');
                    $groupsaleModel->setId($this->getRequest()->getParam('id'))
                        ->setTitle($postData['title'])
                        ->setContent($postData['content'])
                        ->setStatus($postData['status'])
                        ->save();
                    Mage::getSingleton('adminhtml/session')->addSuccess(Mage::helper('adminhtml')->__('Item was successfully saved'));
                    Mage::getSingleton('adminhtml/session')->setGroupsaleData(false);
                    $this->_redirect('*/*/');
                    return;
                } catch (Exception $e) {
                    Mage::getSingleton('adminhtml/session')->addError($e->getMessage());
                    Mage::getSingleton('adminhtml/session')->setGroupsaleData($this->getRequest()->getPost());
                    $this->_redirect('*/*/edit', array('id' => $this->getRequest()->getParam('id')));
                    return;
                }
            }
            $this->_redirect('*/*/');
        }

        public function deleteAction() {
            if ($this->getRequest()->getParam('id') > 0) {
                try {
                    $groupsaleModel = Mage::getModel('groupsales/groupsale');
                    $groupsaleModel->setId($this->getRequest()->getParam('id'))->delete();
                    Mage::getSingleton('adminhtml/session')->addSuccess(Mage::helper('adminhtml')->__('Item was successfully deleted'));
                    $this->_redirect('*/*/');
                } catch (Exception $e) {
                    Mage::getSingleton('adminhtml/session')->addError($e->getMessage());
                    $this->_redirect('*/*/edit', array('id' => $this->getRequest()->getParam('id')));
                }
            }
            $this->_redirect('*/*/');
        }

        /**
         * Product grid for AJAX request.
         * Sort and filter result for example.
         */
        public function gridAction() {
            $this->loadLayout();
            $this->getResponse()->setBody(
                $this->getLayout()->createBlock('importedit/adminhtml_groupsales_grid')->toHtml()
            );
        }
    }

    Any idea what is causing the error?

    Read the article

  • Accessing Yahoo real-time stock quotes

    - by DVK
    There's a fairly easy way of retrieving 15-minute delayed quotes off of the Yahoo! Finance web site (the "quotes.csv" API). However, so far I have been unable to find any info on how to access real-time quotes. The hang-ups with real-time quotes are:

    - They are only available to a logged-in user.
    - There is no API.
    - It is non-obvious how to scrape the info; I'm somewhat convinced the quotes are placed on the page by some weird Ajax call.

    So I was wondering if anyone has managed to develop a publicly available solution to retrieve real-time quotes for a stock from Yahoo! Finance. Notes:

    - Implementation language/framework is flexible, but Perl or Excel is highly preferred.
    - Assume that security is not an issue; I'm willing to supply my Yahoo user ID and password, even in cleartext.
    - I'm not 100% hung up on Yahoo; they are merely the only provider of free real-time stock quotes I'm familiar with. If the same thing can be done with Google Finance, I'd be just as happy.
    - This is for a personal project, so scalability/fault tolerance/etc. are not important.
    - I'm looking for a "do the whole retrieval" library ideally, but if I'm pointed to partial solutions (e.g. how to retrieve info from Yahoo's logged-in pages, or how to scrape real-time quotes from Yahoo's page), I can fill in the blanks.
    - I saw Finance::YahooQuote, but it does not seem to allow you to supply log-in information, and it appears to use the lagging quotes.csv API.

    Thanks!
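    As a point of reference for the delayed path the question mentions, here is a minimal sketch of pulling the legacy quotes.csv endpoint. The URL and its format codes (s = symbol, l1 = last trade) are recalled from memory, and Yahoo has long since retired this endpoint, so treat it purely as illustration:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;

        public class DelayedQuote {
            public static void main(String[] args) throws Exception {
                // Legacy 15-minute-delayed CSV endpoint; no login required.
                URL url = new URL("http://download.finance.yahoo.com/d/quotes.csv?s=MSFT&f=sl1");
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream()))) {
                    System.out.println(in.readLine()); // e.g. "MSFT",30.12
                }
            }
        }

    The real-time path has no equivalent URL, which is exactly the scraping problem the question describes.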

    Read the article

  • New to MVVM - Best practices for separating the data-processing thread and the UI thread?

    - by OffApps Cory
    Good day. I have started messing around with the MVVM pattern, and I am having some problems balancing UI responsiveness against data processing. I have a program that tracks packages. Shipment and package entities are persisted in a SQL database and displayed in a WPF view. Upon initial retrieval of the records there is a noticeable pause before the new shipments view is displayed, and I have not even implemented the code that counts overdue/active shipments yet (which will necessitate a tracking check via web service, and a lot of time). I have built this with the Ocean framework, and all appeared to be going well until I first started my foray into multi-threading. It broke, and it appeared to break something in Ocean... Here is what I did:

    Private QueryThread As New System.Threading.Thread(AddressOf GetShipments)

    Public Sub New()
        ' Insert code required on object creation below this point.
        Me.New(ViewManagerService.CreateInstance, ViewModelUIService.CreateInstance)

        'Perform initial query of shipments
        'QueryThread.Start()
        GetShipments()
        Console.WriteLine(Me.Shipments.Count)
    End Sub

    Public Sub New(ByVal objIViewManagerService As IViewManagerService, ByVal objIViewModelUIService As IViewModelUIService)
        MyBase.New(objIViewModelUIService)
    End Sub

    Public Sub GetShipments()
        Dim InitialResults = From shipment In db.Shipment.Include("Packages") _
                             Select shipment
        Me.Shipments = New ShipmentsCollection(InitialResults, db)
    End Sub

    So I declared a new thread, assigned it the GetShipments method, and instantiated it in the default constructor. Ocean freaks out at this, so there must be a better way of doing it. I have not had the chance to figure out the usage of the SQL ORM in Ocean, so I am using Entity Framework (perhaps one of these days I will look at NHibernate or something too). Any information would be greatly appreciated. I have looked at a number of articles, and they all have examples of simple uses. Some have mentioned the Dispatcher, but none really go very far into how it is used. Anyone know any good tutorials? Cory

    Read the article

  • Does Perl's Net::Cassandra module support UTF-8?

    - by knorv
    I've run into a really strange UTF-8 problem with Net::Cassandra::Easy (which is built upon Net::Cassandra): UTF-8 strings written to Cassandra are garbled upon retrieval. The following code shows the problem:

    use strict;
    use utf8;
    use warnings;
    use Net::Cassandra::Easy;

    binmode(STDOUT, ":utf8");

    my $key = "some_key";
    my $column = "some_column";
    my $set_value = "\x{2603}";

    my $cassandra = Net::Cassandra::Easy->new(keyspace => "Keyspace1", server => "localhost");
    $cassandra->connect();
    $cassandra->mutate([$key], family => "Standard1", insertions => { $column => $set_value });

    my $result = $cassandra->get([$key], family => "Standard1", standard => 1);
    my $get_value = $result->{$key}->{"Standard1"}->{$column};

    if ($set_value eq $get_value) {
        # this is the path I want.
        print "OK: $set_value == $get_value\n";
    } else {
        # this is the path I get.
        print "ERR: $set_value != $get_value\n";
    }

    When running the code above, $set_value eq $get_value evaluates to false. What am I doing wrong?
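    The symptom is consistent with the value coming back as raw octets: Cassandra's Thrift interface stores and returns bytes, so the client has to encode on write and decode on read with the same charset, and a missing decode step on retrieval is a commonly reported culprit (in Perl, something like Encode::decode_utf8 on the fetched value). A language-neutral illustration of the round trip, rendered here in Java since the principle is the same:

        import java.nio.charset.StandardCharsets;

        public class Utf8RoundTrip {
            public static void main(String[] args) {
                String snowman = "\u2603"; // same code point as the Perl "\x{2603}"
                // The database layer sees only bytes; encode on write...
                byte[] stored = snowman.getBytes(StandardCharsets.UTF_8);
                // ...and decode with the SAME charset on read.
                String fetched = new String(stored, StandardCharsets.UTF_8);
                System.out.println(snowman.equals(fetched)); // true
                // Decoding the same bytes as Latin-1 instead produces the
                // classic "garbled on retrieval" mojibake.
                System.out.println(new String(stored, StandardCharsets.ISO_8859_1));
            }
        }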

    Read the article

  • Is Berkeley DB XML a viable database backend?

    - by w00t
    Apparently, BDB XML has been around since at least 2003, but I only recently stumbled upon it on Oracle's website: Berkeley DB XML. Here's the blurb: Oracle Berkeley DB XML is an open source, embeddable XML database with XQuery-based access to documents stored in containers and indexed based on their content. Oracle Berkeley DB XML is built on top of Oracle Berkeley DB and inherits its rich features and attributes. Like Oracle Berkeley DB, it runs in process with the application with no need for human administration. Oracle Berkeley DB XML adds a document parser, XML indexer and XQuery engine on top of Oracle Berkeley DB to enable the fastest, most efficient retrieval of data. To me it seems that the underlying ideas are technically sound and probably more mature than the newer document-based DBs like CouchDB or MongoDB. It has support for C, C++, Ruby and Perl, as far as I can determine. It even has HA capabilities, like automatic replication using a master/slave model with automatic election. However, I can't seem to find any projects that use it. Is there something fundamentally wrong with it? Is the license too onerous? Is it too complicated? Why is it not being used?

    Read the article

  • Would OpenID or OAuth work for authorization/authentication on a distributed web service?

    - by David Eyk
    We're in the early stages of designing a RESTful/resource-oriented web service API for a computational linguistics application. Because many of the resources we plan to serve are rights-encumbered, a key design decision has been to specify the platform so that each resource provider can expose their own web service that complies with the API spec. This way, the rights owner maintains control over their content (and thus the ability to throttle or deny access at will) and a direct relationship with the consumer, while still being able to participate in the collaborative network. At the same time, to simplify the job of writing a client for this service, we want to allow a client access to the distributed service through one endpoint, with the server handling content negotiation and retrieval from the appropriate providers. Right now, we're at an impasse on authentication/authorization schemes. One of our number has argued for the (technical) simplicity of a central authentication registry, but others are concerned about the organizational complexity of such a scheme. It seems to me, based on an admittedly limited understanding of the technologies, that a combination of OpenID and OAuth would do the trick, with a client authenticating with the endpoint via OpenID, and the server taking action on the user's behalf with the various content providers using OAuth. I've only ever seen implementations (e.g. Stack Overflow, Twitter, etc.) where a human was present to intervene, and I still need to do more research on these technologies. Would a scheme like this work for an automated web service, or would it make the client too difficult to implement and operate?

    Read the article

  • Difficulty setting an ArrayList into a java.sql.Blob to save in the DB using Hibernate

    - by me_here
    I'm trying to save a Java ArrayList in a database (H2) by setting it as a blob, for retrieval later. If this is a bad approach, please say so; I haven't been able to find much information on this area. I have a column of type Blob in the database, and Hibernate maps to this with java.sql.Blob. The code I'm struggling with is:

    Drawings drawing = new Drawings();
    try {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = null;
        oos = new ObjectOutputStream(bos);
        oos.writeObject(plan.drawingPane21.pointList);
        byte[] buff = bos.toByteArray();
        Blob drawingBlob = null;
        drawingBlob.setBytes(0, buff);
        drawing.setDrawingObject(drawingBlob);
    } catch (Exception e) {
        System.err.println(e);
    }

    The object I'm trying to save into a blob (plan.drawingPane21.pointList) is of type ArrayList<DrawingDot>, DrawingDot being a custom class implementing Serializable. My code is failing on the line drawingBlob.setBytes(0, buff); with a NullPointerException. Help appreciated.
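    Two observations on the failing snippet: drawingBlob is explicitly null when setBytes is called, and JDBC Blob offsets are 1-based in any case. One common workaround, sketched here untested and assuming the entity property can be remapped from java.sql.Blob to byte[] (which Hibernate persists to a blob/varbinary column), is to hand Hibernate the serialized bytes directly:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;

        public final class BlobBytes {
            // Serialize any Serializable object (e.g. an ArrayList of points)
            // to a byte[]; with the entity property declared as byte[], no
            // java.sql.Blob instance ever needs to be constructed by hand.
            public static byte[] toBytes(Serializable obj) throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                    oos.writeObject(obj);
                }
                return bos.toByteArray();
            }
        }

    The entity setter then becomes drawing.setDrawingObject(BlobBytes.toBytes(pointList)), with the reverse deserialization on load.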

    Read the article

  • How to return multiple array items using JSON/jQuery

    - by Scarface
    Hey guys, quick question. I have a query that will usually return multiple results from a database; while I know how to return one result, I am not sure how to return multiple results in jQuery. I just want to take each of the returned results and run them through my prepare function. I have been trying to use 'for' to handle the array of data, but I don't think it can work, since I am returning different array values. If anyone has any suggestions, I would really appreciate it.

    JQUERY RETRIEVAL

    for (i = 0; i < json.rows; i++) {
        $('#users_online').append(online_users(json[i]));
        $('#online_list-' + count2).fadeIn(1500);
    }

    PHP PROCESSING

    $qryuserscount1 = "SELECT active_users.username, COUNT(scrusersonline.id) AS rows
                       FROM scrusersonline
                       LEFT JOIN active_users ON scrusersonline.id = active_users.id
                       WHERE topic_id='$topic_id'";
    $userscount1 = mysql_query($qryuserscount1);

    while ($row = mysql_fetch_array($userscount1)) {
        $onlineuser = $row['username'];
        $rows = $row['rows'];
        if ($username == $onlineuser) {
            $str2 = "<a href=\"statistics.php?user=$onlineuser\"><div class=\"me\">$onlineuser</div></a>";
        } else {
            $str2 = "<b><a href=\"statistics.php?user=$onlineuser\"><div class=\"others\">$onlineuser</div></a></b>";
        }
        $data['rows'] = $rows;
        $data['entry'] = $str1 . $str2;
    }
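    Note that the PHP loop above overwrites $data on every row, so at most one row survives to be encoded; the usual pattern is to accumulate the rows into an array, serialize that array once, and have the client iterate it. For illustration of the accumulate-then-encode shape only (rendered in Java with the org.json library rather than PHP, purely to show the structure):

        import org.json.JSONArray;
        import org.json.JSONObject;

        public class OnlineUsersPayload {
            // Build one JSON array holding every row, then serialize it once.
            public static String build(String[] usernames) {
                JSONArray rows = new JSONArray();
                for (String user : usernames) {
                    JSONObject row = new JSONObject();
                    row.put("entry", "<a href=\"statistics.php?user=" + user + "\">" + user + "</a>");
                    rows.put(row);
                }
                return rows.toString(); // the client loops over this array and appends each entry
            }
        }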

    Read the article

  • Classic ASP to WCF using the Service Moniker

    - by Jab
    I am trying to consume a WCF logging service from classic ASP without deploying a COM wrapper. I found a method of doing so here. Here is the VBScript, simplified:

    Dim addr
    addr = "service:mexAddress=""net.pipe://localhost/Services/Logging/LoggingManager/Mex""," _
         & "address=""net.pipe://localhost/Services/Logging/LoggingManager/classic/""," _
         & "contract=""ILoggingManagerClassic"", contractNamespace=""http://Services.Logging.Classic/""," _
         & "binding=""NetNamedPipeBinding_ILoggingManagerClassic"", bindingNamespace=""http://Services.Logging.Classic/"""

    set objErrorLogger = GetObject(addr)
    Dim strError : strError = objErrorLogger.LogError("blahblah")

    This works on Server 2008, but fails with this error on Server 2003:

    Failed to do mex retrieval:Metadata contains a reference that cannot be resolved: net.pipe://localhost/Services/Logging/LoggingManager/Mex..

    It only fails when running through ASP; a sample VBS file on the same machine using the same code works fine. I think it may be permission-related, but I don't know where to begin. Anyone have any ideas? EDIT: let me clarify that the WCF host is a Windows service running as NETWORK SERVICE. If this belongs on Server Fault, a moderator can move it.

    Read the article

  • Persisting complex data between postbacks in ASP.NET MVC

    - by Robert Wagner
    I'm developing an ASP.NET MVC 2 application that connects to some services to do data retrieval and update. The services require that I provide the original entity along with the updated entity when updating data. This is so they can do change tracking and optimistic concurrency. The services cannot be changed. My problem is that I need to somehow store the original entity between postbacks. In WebForms I would have used ViewState, but from what I have read, that is out for MVC. The original values do not have to be tamper-proof, as the services treat them as untrusted. The entities would be 1 KB at most, and it is an intranet app. The options I have come up with are:

    1. Session - ruled out - store the entity in the Session, but I don't like this idea, as there are no plans to share session between
    2. URL - ruled out - the data is too big.
    3. HiddenField - store the serialized entity in a hidden field, perhaps with encryption/encoding.
    4. HiddenVersion - the entities have a (SQL) version field on them, which I could put into a hidden field. Then, on a save, I get the "original" entity from the services and compare the versions, doing my own optimistic concurrency.
    5. Cookies - like 3 or 4, but using a cookie instead of a hidden field.

    I'm leaning towards option 4, although 3 would be simpler. Are these valid options, or am I going down the wrong track? Is there a better way of doing this?
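    For what option 4 amounts to in code, here is a minimal sketch of the version-compare-on-save pattern; all the type and method names are invented stand-ins for the real services:

        public class OptimisticSaver {
            interface Entity { long getId(); long getVersion(); }
            interface Service {
                Entity fetchCurrent(long id);                 // re-read the "original"
                void update(Entity original, Entity edited);  // service wants both
            }

            void save(Entity edited, long versionFromHiddenField, Service service) {
                Entity original = service.fetchCurrent(edited.getId());
                // If the stored version no longer matches what was rendered into
                // the page, someone else changed the row in the meantime.
                if (original.getVersion() != versionFromHiddenField) {
                    throw new IllegalStateException("Row changed since the page was rendered");
                }
                service.update(original, edited);
            }
        }

    The hidden field then only has to carry the small version value rather than the whole serialized entity, which is the trade-off between options 3 and 4.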

    Read the article

  • (C#) Get index of current foreach iteration

    - by Graphain
    Hi. Is there some rare language construct I haven't encountered (like the few I've learned recently, some on Stack Overflow) in C# to get a value representing the current iteration of a foreach loop? For instance, I currently do something like this, depending on the circumstances:

    int i = 0;
    foreach (Object o in collection)
    {
        ...
        i++;
    }

    Answers:

    @bryansh: I am setting the class of an element in a view page based on the position in the list. I guess I could add a method that gets the CSS class for the objects I am iterating through, but that almost feels like a violation of the interface of that class.

    @Brad Wilson: I really like that - I've often thought about something like that when using the ternary operator, but never really given it enough thought. As a bit of food for thought: it would be nice if you could somehow add (generically, to all IEnumerable objects) a handle on the enumerator to increment the value that an extension method returns, i.e. inject a method into the IEnumerable interface that returns an iteration index. Of course this would be blatant hacks and witchcraft... Cool though...

    @crucible: Awesome, I totally forgot to check the LINQ methods. Hmm, it appears to be a terrible library implementation though. I don't see why people are downvoting you. You'd expect the method to either use some sort of HashTable of indices or even another SQL call, not an O(N) iteration... (@Jonathan Holland: yes, you are right, expecting SQL was wrong.)

    @Joseph Daigle: The difficulty is that I assume the foreach casting/retrieval is optimised more than my own code would be.

    @Jonathan Holland: Ah, cheers for explaining how it works, and ha at firing someone for using it.

    Read the article

  • Unable to get ncName and netBIOSName Properties

    - by Randz
    I've found some code on the net for retrieving the NetBIOSName (pre-Windows 2000 domain name) of an Active Directory domain. Here's my code sample:

    Me._rootDSE = New System.DirectoryServices.DirectoryEntry("GC://RootDSE", "", "")

    Dim results As System.DirectoryServices.SearchResultCollection = Nothing
    Dim ADSPath As String = "GC://CN=Partitions," + Me._rootDSE.Properties("configurationNamingContext").Value.ToString()
    Dim adse As System.DirectoryServices.DirectoryEntry = New System.DirectoryServices.DirectoryEntry(ADSPath, "", "")

    Dim searcher As System.DirectoryServices.DirectorySearcher
    searcher = New System.DirectoryServices.DirectorySearcher(adse)
    searcher.SearchScope = DirectoryServices.SearchScope.OneLevel
    searcher.Filter = "(&(objectClass=crossRef)(systemflags=3))"
    searcher.PropertiesToLoad.Add("netbiosname")
    searcher.PropertiesToLoad.Add("ncname")

    results = searcher.FindAll()
    If results.Count > 0 Then
        For Each sr As System.DirectoryServices.SearchResult In results
            Dim de As System.DirectoryServices.DirectoryEntry = sr.GetDirectoryEntry()
            'netbiosname and ncname properties return nothing
            System.Diagnostics.Trace.WriteLine(sr.GetDirectoryEntry().Properties("netbiosname").Value.ToString())
            System.Diagnostics.Trace.WriteLine(sr.GetDirectoryEntry().Properties("ncname").Value.ToString())
        Next
    End If

    When I use the "(&(objectClass=crossRef)(systemFlags=3))" filter I do not get any results, but when I remove the systemFlags clause I do get some. However, even in the search results I do get, I still cannot access the values of the ncName and netBIOSName properties, while I can properly get other properties of the search result, like distinguishedName and CN. Any idea what I might be doing wrong, or where to look further?

    Read the article

  • SQL View with Data from two tables

    - by Alex
    Hello! I can't seem to crack this - I have two tables (Persons and Companies), and I'm trying to create a view that:

    1. shows all persons;
    2. also returns each company by itself once, regardless of how many persons are related to it;
    3. orders by name across both tables.

    To clarify, some sample data:

    (Table: Companies)
    Id  Name
    1   Banana
    2   ABC Inc.
    3   Microsoft
    4   Bigwig

    (Table: Persons)
    Id  Name       RelatedCompanyId
    1   Joe Smith  3
    2   Justin
    3   Paul Rudd  4
    4   Anjolie
    5   Dustin     4

    The output I'm looking for is something like this:

    Name       PersonName  CompanyName  RelatedCompanyId
    ABC Inc.   NULL        ABC Inc.     NULL
    Anjolie    Anjolie     NULL         NULL
    Banana     NULL        Banana       NULL
    Bigwig     NULL        Bigwig       NULL
    Dustin     Dustin      Bigwig       4
    Joe Smith  Joe Smith   Microsoft    3
    Justin     Justin      NULL         NULL
    Microsoft  NULL        Microsoft    NULL
    Paul Rudd  Paul Rudd   Bigwig       4

    As you can see, the new "Name" column is ordered across both tables (the company names appear correctly in between the person names), and each company appears exactly once, regardless of how many people are related to it. Can this even be done in SQL?! P.S. I'm trying to create a view so I can use this later for easy data retrieval and fulltext indexing, and to make the programming side simpler by just querying the view.
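    It can be done: the usual shape is a UNION ALL between a persons-joined-to-companies select and a companies-only select, with the ordering applied by whoever queries the view (views themselves are generally unordered, and some engines want explicit CASTs on the NULL columns). An untested sketch, wrapped in JDBC only to keep the examples in one language; the connection string is a placeholder and the tables are assumed to exist already:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class CreateCombinedView {
            public static void main(String[] args) throws Exception {
                try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
                     Statement s = c.createStatement()) {
                    s.execute(
                        "CREATE VIEW NamesAndCompanies AS " +
                        // one row per person, with the joined company (if any)
                        "SELECT p.Name AS Name, p.Name AS PersonName, " +
                        "       c.Name AS CompanyName, p.RelatedCompanyId " +
                        "  FROM Persons p LEFT JOIN Companies c ON c.Id = p.RelatedCompanyId " +
                        "UNION ALL " +
                        // plus exactly one row per company, person columns NULL
                        "SELECT c.Name, NULL, c.Name, NULL FROM Companies c");
                    // Consumers then SELECT * FROM NamesAndCompanies ORDER BY Name.
                }
            }
        }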

    Read the article

  • Comparing RPG to C# and SQL.

    - by Kevin
    In an RPG program (one of IBM's languages on the AS/400) I can "chain" out to a file to see if a record (say, a certain customer record) exists in the file. If it does, then I can update that record instantly with new data. If the record doesn't exist, I can write a new record. The code would look like this:

        Customer  Chain  CustFile  71   ; turn on indicator 71 if not found
        if *in71                        ; if 71 is "on"
            eval CustID = Customer
            eval CustCredit = 10000
            write CustRecord
        else                            ; 71 not on, record found
            CustCredit = 10000
            update CustRecord
        endif

    Not being real familiar with SQL/C#, I'm wondering if there is a way to do a random retrieval from a file (which is what "chain" does in RPG). Basically I want to see if a record exists. If it does, update the record with some new information. If it does not, then I want to write a new record. I'm sure it's possible, but not quite sure how to go about doing it. Any advice would be greatly appreciated.
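    A minimal sketch of that check-then-write pattern in C# against SQL Server. The table and column names (Customers, CustID, CustCredit) are assumptions modeled on the RPG snippet, and the connection string is a placeholder:

        using System.Data.SqlClient;

        class UpsertSketch
        {
            static void SetCredit(string connectionString, int custId, decimal credit)
            {
                // IF EXISTS plays the role of CHAIN's found/not-found indicator:
                // update when the row is there, insert when it is not.
                const string sql = @"
                    IF EXISTS (SELECT 1 FROM Customers WHERE CustID = @id)
                        UPDATE Customers SET CustCredit = @credit WHERE CustID = @id
                    ELSE
                        INSERT INTO Customers (CustID, CustCredit) VALUES (@id, @credit)";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@id", custId);
                    cmd.Parameters.AddWithValue("@credit", credit);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    SQL Server 2008 and later also offer a single-statement MERGE for the same update-or-insert step.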

    Read the article

  • Custom WordPress page

    - by sharon
    I'd like to implement a custom post-retrieval page in WordPress. Basically, I'm using AJAX to call this page, which will be passed a post ID and retrieve certain data from that post. Note: please don't mistake this for a template question. I do not want a template for one single page - I am looking to make this page query different posts based on post ID and return certain data from each. So I tried creating a page:

        <?php
        $args = array('p' => '77');
        $friends = new WP_Query($args);

        if ($friends->have_posts()) : $friends->the_post();
            the_title();
            the_content();
        else : ?>
            <p>Sorry, no posts are available.</p>
        <?php endif; ?>

    But this does not work, since the page is not loading the WordPress functions needed to handle the query. Thanks in advance for any help!

    Read the article

  • Informational messages returned with WCF involved

    - by DT
    This question is about "informational messages" and having them flow from a back end to a front end in a consistent manner. The quick question is: how do you do it?

    Background: a web application uses WCF to call back-end services. In a back-end service, a "message" may occur. This "message" may occur for a number of reasons, but for this discussion let's assume that a piece of data was looked at and it was determined that the caller should be given back some information about it. This informational message may occur during a save, and also during retrieval of information. Again, the message itself is not what is important here, but the fact that there are informational messages to give back under a number of different scenarios.

    As a team, we all want to return these messages in a standard way, all of the time. In the past this "standard way" has been done differently by different people. Here are some possibilities:

    1) Every operation has a "ref" parameter at the end that contains the messages.
    2) Every method returns the messages... however, this only really works for "save" methods, as one would think that "retrieve" methods should return actual data, not messages.
    3) Some approach using the call context, so as to not pollute all method signatures; however, with WCF in the picture this complicates things - do the messages then go back in a header?

    Back to my question then: how are others returning messages such as these back through the tiers of an application, over WCF, and back to the caller?
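    One common shape - offered here only as a hedged sketch of option 2 generalized, with all type and member names invented - is a small response envelope that every operation returns, so informational messages ride alongside the payload for saves and retrievals alike:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract]
        public class ServiceResponse<T>
        {
            [DataMember]
            public T Result { get; set; }               // the actual data, or a simple status for saves

            [DataMember]
            public List<string> Messages { get; set; }  // informational messages, if any
        }

    A retrieval operation would then return, say, ServiceResponse<CustomerDto> (a hypothetical contract type), and the front end checks Messages in one place regardless of which operation it called.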

    Read the article

  • Using .NET XmlSerializer with get properties and setter functions

    - by brone
    I'm trying to use XmlSerializer from C# to save out a class that has some values that are read by properties (the code being just a simple retrieval of a field value) but set by setter functions (since there is a delegate called if the value changes). What I'm currently doing is this sort of thing. The intended use is to read the value through the InT property and set it with SetInT. Setting it has side effects, so a method is more appropriate than a property here. XmlSerializationOnly_InT exists solely for the benefit of the XmlSerializer (hence the name), and shouldn't be used by normal code.

        class X
        {
            public double InT
            {
                get { return _inT; }
            }

            public void SetInT(double newInT)
            {
                if (newInT != _inT)
                {
                    _inT = newInT;
                    Changed(); // includes delegate call; potentially expensive
                }
            }

            private double _inT;

            // Not called by normal code, as the property set is not just a
            // simple field set or two.
            [XmlElement(ElementName = "InT")]
            public double XmlSerializationOnly_InT
            {
                get { return InT; }
                set { SetInT(value); }
            }
        }

    This works, it's easy enough to do, and the XML file looks like you'd expect. It's manual labour, though, and a bit ugly, so I'm only somewhat satisfied. What I'd really like is to be able to tell the XML serialization to read the value using the property and set it using the setter function. Then I wouldn't need XmlSerializationOnly_InT at all. I seem to be following standard practice by distinguishing between property sets and setter functions in this way, so I'm sure I'm not the only person to have encountered this (though Google suggests I might be). What have others done in this situation? Is there some easy way to persuade the XmlSerializer to handle this sort of thing better? If not, is there perhaps some other easy way to do it?
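    One alternative others reach for - sketched here under the assumption that taking full control is acceptable, since it trades away attribute-driven serialization - is implementing IXmlSerializable, so that ReadXml can route through the setter method directly and no serialization-only property is needed:

        using System.Xml;
        using System.Xml.Schema;
        using System.Xml.Serialization;

        public class X : IXmlSerializable
        {
            private double _inT;

            public double InT { get { return _inT; } }

            public void SetInT(double newInT)
            {
                if (newInT != _inT) { _inT = newInT; /* fire the Changed delegate here */ }
            }

            public XmlSchema GetSchema() { return null; }

            public void ReadXml(XmlReader reader)
            {
                reader.ReadStartElement();                            // consume the outer element
                SetInT(reader.ReadElementContentAsDouble("InT", "")); // routes through the setter
                reader.ReadEndElement();
            }

            public void WriteXml(XmlWriter writer)
            {
                writer.WriteElementString("InT", XmlConvert.ToString(InT));
            }
        }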

    Read the article

  • Outlook 2010 Retrieving and restricting appointments programmatically causing recurrences to be incl

    - by Mike Dearing
    I wrote a WinForms app that uses Microsoft.Office.Interop.Outlook to retrieve and restrict appointments based upon the date range entered by a user. This worked fine with Outlook 2007 installed; however, now that some users have updated to Outlook 2010, the appointment retrieval is pulling back incorrect appointments along with the correct ones falling within the specified date range. The additional incorrect appointments always appear to be recurring appointments. I was wondering if this is a known bug and, if so, what exactly is causing these additional recurring appointments to come in? I'd rather not have to throw in a workaround where I step through the items after they have been restricted and remove the extra ones, when this functionality works fine with 2007. Note: I've not recompiled or updated any code when experiencing this issue, just run the old program. This is the spot in my code where appointments are being restricted; it is similar to the approach advised in the following MSDN link: http://msdn.microsoft.com/en-us/library/bb611267.aspx

        Microsoft.Office.Interop.Outlook.Items outlookItems = outlookMapiFolder.Items.Restrict(
            "[Start] >= '" + outlookImport.startDay.ToString("g") +
            "' AND [Start] <= '" + outlookImport.endDay.ToString("g") + "'");
        outlookItems.Sort("[Start]", Type.Missing);
        outlookItems.IncludeRecurrences = true;
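    One detail that may matter - an observation, not a confirmed diagnosis: the MSDN article linked above sorts the collection and sets IncludeRecurrences before applying the restriction, whereas the snippet above restricts first. A sketch of that ordering, reusing outlookMapiFolder, startDay and endDay from the question:

        string filter = "[Start] >= '" + outlookImport.startDay.ToString("g") +
                        "' AND [Start] <= '" + outlookImport.endDay.ToString("g") + "'";

        Microsoft.Office.Interop.Outlook.Items outlookItems = outlookMapiFolder.Items;
        outlookItems.Sort("[Start]", Type.Missing);   // ascending sort on Start first
        outlookItems.IncludeRecurrences = true;       // then expand recurrences
        outlookItems = outlookItems.Restrict(filter); // restrict the expanded, sorted view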

    Read the article

  • Need a code snippet for backward paging...

    - by Ali
    Hi guys, I'm in a bit of a fix here. I know how easy it is to build simple pagination links for dynamic pages, whereby you can navigate between partial sets of records from SQL queries. However, the situation I have is as follows: consider that I wish to paginate between records listed in a flat file. I have no problem with the retrieval, or even the pagination, assuming that the flat file is a CSV file with the first field as an ID and new records on new lines. However, I need to make a pagination system which paginates backwards, i.e. I want the LAST entry in the file to appear first, and so forth. Since I don't have the power of SQL to help me here I'm kind of stuck - all I have is a fixed sequence which needs to be paginated. Also note that the ID mentioned as the first field is not necessarily numeric, so forget about sorting by numerics here. I basically need a way to loop through the file, but backwards, and paginate it as such. How can I do that? I'm working in PHP - I just need the code to loop through and paginate, i.e. how to tell which is the offset, which is the current page, etc.
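    The offset arithmetic itself is language-neutral: reverse the sequence of lines, then skip (page - 1) * pageSize entries and take pageSize of them. A sketch of that logic - in C# for consistency with the other examples on this page, with the file path and page parameters as placeholders, and assuming the whole file fits comfortably in memory:

        using System.Linq;

        class ReversePagingSketch
        {
            // Page 1 shows the LAST record in the file first.
            static string[] GetPage(string path, int page, int pageSize)
            {
                string[] lines = System.IO.File.ReadAllLines(path);
                return lines.Reverse()
                            .Skip((page - 1) * pageSize)
                            .Take(pageSize)
                            .ToArray();
            }
        }

    The total page count is then the line count divided by pageSize, rounded up, computed from the same array length.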

    Read the article

  • Pattern for limiting number of simultaneous asynchronous calls

    - by hitch
    I need to retrieve multiple objects from an external system. The external system supports multiple simultaneous requests (i.e. threads), but it is possible to flood the external system - therefore I want to be able to retrieve multiple objects asynchronously, but I want to be able to throttle the number of simultaneous async requests. i.e. I need to retrieve 100 items, but don't want to be retrieving more than 25 of them at once. When each request of the 25 completes, I want to trigger another retrieval, and once they are all complete I want to return all of the results in the order they were requested (i.e. there is no point returning the results until the entire call is returned). Are there any recommended patterns for this sort of thing? Would something like this be appropriate (pseudocode, obviously)?

        private List<externalSystemObjects> returnedObjects = new List<externalSystemObjects>();

        public List<externalSystemObjects> GetObjects(List<string> ids)
        {
            int callCount = 0;
            int maxCallCount = 25;
            WaitHandle[] handles;

            foreach (id in itemIds to get)
            {
                if (callCount < maxCallCount)
                {
                    WaitHandle handle = executeCall(id, callback);
                    addWaitHandleToWaitArray(handle);
                }
                else
                {
                    int returnedCallId = WaitHandle.WaitAny(handles);
                    removeReturnedCallFromWaitHandles(handles);
                }
            }

            WaitHandle.WaitAll(handles);
            return returnedObjects;
        }

        public void callback(object result)
        {
            returnedObjects.Add(result);
        }
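    For comparison, a minimal sketch of one common shape for this - assuming .NET 4's SemaphoreSlim and CountdownEvent are available, and with fetchObject standing in for the external-system call: a semaphore capped at 25 gates how many calls run at once, and each request writes into its own slot of a pre-sized array so the original order is preserved.

        using System;
        using System.Collections.Generic;
        using System.Threading;

        class ThrottleSketch
        {
            static object[] GetObjects(IList<string> ids, Func<string, object> fetchObject)
            {
                var throttle = new SemaphoreSlim(25);        // at most 25 calls in flight
                var results = new object[ids.Count];         // one slot per request keeps the order
                var pending = new CountdownEvent(ids.Count); // tracks completion of every call

                for (int i = 0; i < ids.Count; i++)
                {
                    int slot = i;       // per-iteration copy for the closure
                    throttle.Wait();    // block until one of the 25 slots frees up
                    ThreadPool.QueueUserWorkItem(_ =>
                    {
                        try { results[slot] = fetchObject(ids[slot]); }
                        finally
                        {
                            throttle.Release(); // free the slot for the next queued call
                            pending.Signal();   // count this call as finished
                        }
                    });
                }

                pending.Wait(); // nothing is returned until every call has completed
                return results;
            }
        }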

    Read the article
