Search Results

Search found 266 results on 11 pages for 'sybase iq'.

Page 10/11 | < Previous Page | 6 7 8 9 10 11  | Next Page >

  • Myths about Coding Craftsmanship part 2

    - by tom
    Myth 3: The source of all bad code is inept developers and stupid people. When you review code, is this what you assume? Shame on you. You are probably making assumptions in your code if you are assuming so much already. Bad code can be the result of any number of causes, including but not limited to: using dated techniques (like boxing when generics are available), not following standards (“look how he does the spacing between arguments!” or “did he really just name that variable ‘bln_Hello_Cats’?”), being redundant, using properties, methods, or objects in a novel way (like switching on button.Text between “Hello World” and “Hello World ” //clever use of space character… sigh), not following the SOLID principles, or hacking around assumptions made in earlier iterations / hacking in features that should be worked into the overall design.

    The first two issues, while annoying, are pretty easy to spot and can be fixed easily. If your coding team is made up of experienced professionals who are passionate about staying current, then these shouldn’t be happening. If you work with a variety of skills, backgrounds, and experience, then there will be some of this stuff going on. If you have an opportunity to mentor such a developer who is receptive to constructive criticism, don’t be a jerk; help them and the codebase will improve. A little patience can improve the codebase, your work environment, and even your perspective.

    The novelty and redundancy I have encountered has often been the use of creativity when language knowledge was perceived as unavailable or too time consuming. When developers learn on the job you get a lot of this. Rather than going to MSDN, developers will use what they know. Depending on the constraints of their assignment, hacking together what they know may seem quite practical. This was not stupid, though I often wonder how much time is actually “saved” by hacking. These issues are often harder to untangle, if we ever do. They can also grow out of control as we write hack after hack to make it work and get back to some development that is satisfying.

    Hacking upon an existing hack is what I call “feeding the monster”. Code monsters are anti-patterns and hacks gone wild. The reason code monsters continue to get bigger is that they keep growing in scope, touching more and more of the application. This is not the result of dumb developers. It is probably the result of avoiding design, not taking the time to understand the problems, or failing to anticipate or communicate the vision of the product. If our developers don’t understand the purpose of a feature or product, how do we expect potential customers to do so?

    Forethought and organization are often what is missing from bad code. Developers who do not use the SOLID principles should be encouraged to learn these principles and be given guidance on how to apply them. The time “saved” by giving hackers room to hack will be made up for and then some: not as technical debt, but as shoddy work that, if not replaced, will be struggled with again and again. Bad code is not (usually) the result of dumb developers; it is the result of trying to do too much without the proper resources and of replacing the right thing that needs doing with the first thoughtless thing that comes into our heads. Object oriented code is all about relationships between objects. Coders who believe their coworkers are all fools tend to write objects that are difficult to work with, not eager to explain themselves, and that perform erratically and irrationally.

    If you constantly find you are surrounded by idiots, you may want to ask yourself if you are being unreasonable, if you are being closed minded, or if you have chosen the right profession. Opening your mind up to the idea that you probably work with rational, well-intentioned people will probably make you a better coder, and it might even make you less grumpy. If you are surrounded by jerks who do not engage in the exchange of ideas, who do not care about their customers or the durability of the code you are building together, then I suggest you find a new place to work.

    Myth 4: Customers don’t care about “beautiful” code. Craftsmanship is customer focused because it means that the job was done right and the product will withstand the abuse, modifications, and scrutiny of our customers. Users can appreciate a predictable timeline for a release, a product delivered on time and on budget, a feature set that does not interfere with the task(s) it is supporting, quick turnarounds on exception messages, self-healing issues, and fewer issues. All of these are hindered by skimping on craftsmanship, whether we are writing data access code or reusable code.

    What do you think? Does bad code come primarily from low-IQ individuals? Do customers care about beautiful code?
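    As a concrete illustration of the “dated techniques” point above, here is a minimal C# sketch (not from the original post; the class and variable names are invented) contrasting the boxing anti-pattern with the generic alternative:

    using System;
    using System.Collections;
    using System.Collections.Generic;

    class BoxingVersusGenerics
    {
        static void Main()
        {
            // Dated technique: ArrayList stores object, so every int is boxed on Add
            // and must be unboxed with a cast on the way out.
            ArrayList boxed = new ArrayList();
            boxed.Add(42);                  // boxing allocation
            int unboxed = (int)boxed[0];    // unboxing cast; fails at runtime if the type is wrong

            // Preferred since C# 2: the generic list is strongly typed, so there is no boxing,
            // no casting, and mistakes are caught at compile time.
            List<int> typed = new List<int>();
            typed.Add(42);
            int value = typed[0];

            Console.WriteLine(unboxed + value);
        }
    }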

    Read the article

  • PHP 5.3 Not Logging

    - by BHare
    I have set error_log = "/var/log/apache2/php_errors.log" and made sure errors were being logged. I have set the file to be owned by the www-data owner and group and even set the permissions to 777. I have confirmed with phpinfo() that the error_log is correctly set, however The logging still only happens in my vhost's apache error log. The following is my php.ini for 5.3.3-7 on Debian Squeeze Apache 2: The top is populated with comments on what I have been interested, or have changed. I have deleted all comments to save space. Full versions here: http://pastebin.com/AhWLiQBR [PHP] ;short_open_tag = On ;allow_call_time_pass_reference = On ;error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED ;display_errors = On ;display_startup_errors = Off ;log_errors = On ;html_errors = On error_log = "/var/log/apache2/php_errors.log" engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = On safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED display_errors = On display_startup_errors = Off log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = On variables_order = "GPCS" request_order = "GPC" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 100M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off file_uploads = On upload_tmp_dir = /tmp upload_max_filesize = 100M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 
[Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 0 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba]

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

    The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network into three distinct components: the Client Component, the Server Component, and the Database Component.

    The Client Component of the network typically represents any device on the network. A basic example of this would be a computer or another network/web-enabled device connected to a network. Network clients request resources on the network and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This is done through the use of various software clients; an example of this can be seen in the use of a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process.

    The Server Components of the network return data to the requesting client based on specific client requests. Server Components also inherit the attributes of a Client Component in that they are devices on the network and can also request information from other Server Components. What differentiates a Client Component from a Server Component, however, is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests, interprets each request, processes the web pages, and then returns the processed data to the web browser client so that it may render the data for the user to interpret.

    The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on a network, it can request information from other Server Components and Database Components, and it also listens for new requests so that it can return data when needed.

    The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another. The three-tier application architecture model consists of the Presentation Layer/Logic, the Business Layer/Logic, and the Data Layer/Logic.

    The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer and presenting that data back to the user. Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows segments of the Business Layer to be distributable and interchangeable because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows the Presentation Layer and Business Layer to be stored on one or more different servers so that they can provide higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site takes on the task of load balancing, it must place a network device in front of one or more machines in order to distribute requests across multiple servers. When a user comes in through the load-balanced device, they are redirected to a specific server based on a few factors. Common load balancing factors include current server availability, current server response time, and current server priority.

    The Business Layer and its corresponding logic are business rules applied to data before it is sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data before it is stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer. The Server Component can be tied directly to this layer in that the server typically stores and processes the Business Layer before results are returned to the end user via the Presentation Layer. In addition, the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer so that additional business analysis can be derived from the data that has already been collected.

    The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. In most modern applications data is typically stored in a database management system; however, data can also take the form of files stored on a file server. In addition, a database can take one of several forms. Common database formats include XML files, pipe-delimited files, tab-delimited files, comma-delimited (CSV) files, plain text files, Microsoft Access, Microsoft SQL Server, MySQL, Oracle, and Sybase. The Database Component of the networking model can be tied directly to the Data Layer because this is where the Data Layer obtains the data it returns to the Business Layer. The Database Component basically provides a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall it as needed, so that the application does not have to keep all of the data in memory. This prevents the overhead that would be created if an application had to retain all data in memory.

    As you can see, the three-tier client/server networking model and the three-tier application architecture model rely very heavily on one another, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is wonderful when an application needs to distribute work across the network. In terms of how the network components and application layers interact: Database Components store all data needed for the Data Access Layer to manipulate and return to the Business Layer; the Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer; and the Client Component hosts the Presentation Layer that interprets the data and presents it to the user.
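    To make the duplicate-account business rule above concrete, here is a minimal C# sketch (not part of the original article; the interface and class names are invented) of a business-layer check that rejects duplicate email addresses before anything reaches the data layer:

    using System;

    // Invented data-layer abstraction: it only knows how to store and look up accounts.
    public interface IAccountRepository
    {
        bool EmailExists(string email);
        void Save(string email, string displayName);
    }

    // Business layer: enforces the "one account per email" rule before touching the data layer.
    public class AccountService
    {
        private readonly IAccountRepository repository;

        public AccountService(IAccountRepository repository)
        {
            this.repository = repository;
        }

        public void Register(string email, string displayName)
        {
            if (string.IsNullOrWhiteSpace(email))
                throw new ArgumentException("An email address is required.", nameof(email));

            // The business rule lives here, not in the presentation or data layer.
            if (repository.EmailExists(email))
                throw new InvalidOperationException("An account already exists for " + email + ".");

            repository.Save(email, displayName);
        }
    }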

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides in simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "…performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment". There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:

    Optimized for Oracle Database 12c - GoldenGate 12c is custom tailored to the unique capabilities of Oracle Database 12c, and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process so that a single GoldenGate capture process is used at the container level rather than at each individual PDB. By having the capture process at the container level, resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here.

    Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed data delivery to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction with, or knowledge of, other delivery processes. This setup was complicated to configure and time consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.

    Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that transactions are applied in the same sequence as they were captured, all while delivering improved performance.

    Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems.

    Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In an upcoming release of GoldenGate, support will be expanded to MS SQL Server, DB2, and Teradata.

    Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.

    Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.

    Stay tuned for more blogs on the new features and the upcoming launch webcast, where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • Changing the Game: Why Oracle is in the IT Operations Management Business

    - by DanKoloski
    Next week, in Orlando, is the annual Gartner IT Operations Management Summit. Oracle is a premier sponsor of this annual event, which brings together IT executives for several days of high-level talks about the state of operational management of enterprise IT. This year, Sushil Kumar, VP Product Strategy and Business Development for Oracle’s Systems & Applications Management, will be presenting on the transformation in IT Operations required to support enterprise cloud computing.

    IT Operations transformation is an important subject because, year after year, we hear essentially the same refrain: large enterprises spend an average of two-thirds (67%!) of their IT resources (budget, energy, time, people, etc.) on running the business, with far too little left over to spend on growing and transforming the business (which is what the business actually needs and wants). In the thirtieth year of the distributed computing revolution (give or take, depending on how you count it), it’s amazing that we have still not moved the needle on the single biggest component of enterprise IT resource utilization.

    Oracle is in the IT Operations Management business because when management is engineered together with the technology under management, the resulting efficiency gains can be truly staggering. To put it simply: what if you could turn that 67% of IT resources spent on running the business into 50%? Or 40%? Imagine what you could do with those resources. It’s now not just possible, but happening. This seems like a simple idea, but it is a radical change from “business as usual” in enterprise IT Operations. For the last thirty years, management has been a bolted-on afterthought: we pick and deploy our technology, then figure out how to manage it. This pervasive dysfunction is a broken cycle that guarantees high ongoing operating costs and low agility. If we want to break the cycle, we need to take a more tightly-coupled approach. As a complete applications-to-disk platform provider, Oracle is engineering management together with technology across our stack and hooking that on-premise management up live to My Oracle Support.

    Let’s examine the results with just one piece of the Oracle stack: the Oracle Database. Oracle began this journey with Oracle Database 9i many years ago with the introduction of low-impact instrumentation in the database kernel (“tell me what’s wrong”) and, through Database 10g, 11g and 11gR2, has successively added integrated advisory (“tell me how to fix what’s wrong”), lifecycle management, and automated self-tuning (“fix it for me, and do it on an ongoing basis for all my assets”). When enterprises take advantage of this tight coupling, the results are game-changing.

    Consider the following (for a full list of public references, visit this link): British Telecom improved database provisioning time 1000% (from weeks to minutes), which allows them to provide a new DBaaS service to their internal customers with no additional resources; Cerner Corporation saved $9.5 million in CapEx and OpEx and launched a brand-new cloud business at the same time; and Vodafone Group plc improved response times 50% and reduced maintenance planning times 50-60% while serving 391 million registered mobile customers. Or see the recent Database Manageability and Productivity Cost Comparisons: Oracle Database 11g Release 2 vs. SAP Sybase ASE 15.7, Microsoft SQL Server 2008 R2 and IBM DB2 9.7, conducted by independent analyst firm ORC.

    In later entries, we’ll discuss similar results across other portions of the Oracle stack and how these efficiency gains are required to achieve the agility benefits of Enterprise Cloud. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Where are the function literals in c++?

    - by academicRobot
    First of all, maybe "literals" is not the right term for this concept, but it's the closest I could think of (not literals in the sense of functions as first-class citizens). The idea is that when you make a conventional function call, it compiles to something like this: callq <immediate address> But if you make a function call using a function pointer, it compiles to something like this: mov <memory location>,%rax callq *%rax Which is all well and good. However, what if I'm writing a template library that requires a callback of some sort with a specified argument list, and the user of the library is expected to know what function they want to call at compile time? Then I would like to write my template to accept a function literal as a template parameter. So, similar to template <int int_literal> struct my_template {...}; I'd like to write template <func_literal_t func_literal> struct my_template {...}; and have calls to func_literal within my_template compile to callq <immediate address>. Is there a facility in C++ for this, or a workaround to achieve the same effect? If not, why not (e.g. some cataclysmic side effects)? How about C++0x or another language? Solutions that are not portable are fine. Solutions that include the use of member function pointers would be ideal. I'm not particularly interested in being told "You are a <socially unacceptable term for a person of low IQ>, just use function pointers/functors." This is a curiosity-based question, and it seems that it might be useful in some (albeit limited) applications. It seems like this should be possible since function names are just placeholders for a (relative) memory address, so why not allow more liberal use (e.g. aliasing) of this placeholder. p.s. I use function pointers and function objects all the time and they are great. But this post got me thinking about the "don't pay for what you don't use" principle in relation to function calls, and it seems like forcing the use of function pointers or a similar facility when the function is known at compile time is a violation of this principle, though a small one.

    Read the article

  • set proxy in apache for XMPP chat

    - by Hunt
    I want to setup a proxy settings in Apache to use Facebook XMPP Chat So far I have setup ejabber server and I am able to access xmpp service using http://mydomain.com:5280/xmpp-http-bind I am able to create Jabber Account too. Now as I want to integrate Facebook XMPP chat , I want my server to sit in between client and chat.facebook.com because I want to implement Facebook chat and custom chat too. So I have read this article and come to know that I need to serve BOSH Service as a proxy in apache to access Facebook Chat service. So I don't know how to set up a proxy in a apache httpd.conf as I have tried following <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /xmpp-httpbind http://www.mydomain.com:5280/xmpp-http-bind ProxyPassReverse /xmpp-httpbind http://www.mydomain.com:5280/xmpp-http-bind But whenever I request http://www.mydomain.com:5280/xmpp-http-bind from strophe.js I am getting following response from server <body type='terminate' condition='internal-server-error' xmlns='http://jabber.org/protocol/httpbind'> BOSH module not started </body> and server log says following E(<0.567.0:ejabberd_http_bind:1239) : You are trying to use BOSH (HTTP Bind) in host "chat.facebook.com", but the module mod_http_bind is not started in that host. Configure your BOSH client to connect to the correct host, or add your desired host to the configuration, or check your 'modules' section in your ejabberd configuration file. here is my existing settings of ejabberd.cfg , but still no luck {5280, ejabberd_http, [ {access,all}, {request_handlers, [ {["pub", "archive"], mod_http_fileserver}, {["xmpp-http-bind"], mod_http_bind} ]}, captcha, http_bind, http_poll, register, web_admin ]} ]}. in a module section {mod_http_bind, [{max_inactivity, 120}]}, and whenever i fire http://www.mydomain.com:5280/xmpp-http-bind url independently am getting following message ejabberd mod_http_bind An implementation of XMPP over BOSH (XEP-0206) This web page is only informative. To use HTTP-Bind you need a Jabber/XMPP client that supports it. 
I have added chat.facebook.com in a list of host in ejabber.cfg as follows {hosts, ["localhost","mydomain.com","chat.facebook.com"]} and now i am getting following response <body xmlns='http://jabber.org/protocol/httpbind' sid='710da2568460512eeb546545a65980c2704d9a27' wait='300' requests='2' inactivity='120' maxpause='120' polling='2' ver='1.8' from='chat.facebook.com' secure='true' authid='1917430584' xmlns:xmpp='urn:xmpp:xbosh' xmlns:stream='http://etherx.jabber.org/streams' xmpp:version='1.0'> <stream:features xmlns:stream='http://etherx.jabber.org/streams'> <mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'> <mechanism>DIGEST-MD5</mechanism> <mechanism>PLAIN</mechanism> </mechanisms> <c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://www.process-one.net/en/ejabberd/' ver='yy7di5kE0syuCXOQTXNBTclpNTo='/> <register xmlns='http://jabber.org/features/iq-register'/> </stream:features> </body> if i use valid BOSH service created my jack moffit http://bosh.metajack.im:5280/xmpp-httpbind then i am getting following valid XML from facebook , but from my server i am not getting this <body xmlns='http://jabber.org/protocol/httpbind' inactivity='60' secure='true' authid='B8732AA1' content='text/xml; charset=utf-8' window='3' polling='15' sid='928073b02da55d34eb3c3464b4a40a37' requests='2' wait='300'> <stream:features xmlns:stream='http://etherx.jabber.org/streams' xmlns='jabber:client'> <mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'> <mechanism>X-FACEBOOK-PLATFORM</mechanism> <mechanism>DIGEST-MD5</mechanism> </mechanisms> </stream:features> </body> Can anyone please help me to resolve the issue

    Read the article

  • Does ModSecurity 2.7.1 work with ASP.NET MVC 3?

    - by autonomatt
    I'm trying to get ModSecurity 2.7.1 to work with an ASP.NET MVC 3 website. The installation ran without errors and looking at the event log, ModSecurity is starting up successfully. I am using the modsecurity.conf-recommended file to set the basic rules. The problem I'm having is that whenever I am POSTing some form data, it doesn't get through to the controller action (or model binder). I have SecRuleEngine set to DetectionOnly. I have SecRequestBodyAccess set to On. With these settings, the body of the POST never reaches the controller action. If I set SecRequestBodyAccess to Off it works, so it's definitely something to do with how ModSecurity forwards the body data. The ModSecurity debug shows the following (looks to me as if all passed through): Second phase starting (dcfg 94b750). Input filter: Reading request body. Adding request argument (BODY): name "[0].IsSelected", value "on" Adding request argument (BODY): name "[0].Quantity", value "1" Adding request argument (BODY): name "[0].VariantSku", value "047861" Adding request argument (BODY): name "[1].Quantity", value "0" Adding request argument (BODY): name "[1].VariantSku", value "047862" Input filter: Completed receiving request body (length 115). Starting phase REQUEST_BODY. Recipe: Invoking rule 94c620; [file "*********************"] [line "54"] [id "200001"]. Rule 94c620: SecRule "REQBODY_ERROR" "!@eq 0" "phase:2,auditlog,id:200001,t:none,log,deny,status:400,msg:'Failed to parse request body.',logdata:%{reqbody_error_msg},severity:2" Transformation completed in 0 usec. Executing operator "!eq" with param "0" against REQBODY_ERROR. Operator completed in 0 usec. Rule returned 0. Recipe: Invoking rule 5549c38; [file "*********************"] [line "75"] [id "200002"]. Rule 5549c38: SecRule "MULTIPART_STRICT_ERROR" "!@eq 0" "phase:2,auditlog,id:200002,t:none,log,deny,status:44,msg:'Multipart request body failed strict validation: PE %{REQBODY_PROCESSOR_ERROR}, BQ %{MULTIPART_BOUNDARY_QUOTED}, BW %{MULTIPART_BOUNDARY_WHITESPACE}, DB %{MULTIPART_DATA_BEFORE}, DA %{MULTIPART_DATA_AFTER}, HF %{MULTIPART_HEADER_FOLDING}, LF %{MULTIPART_LF_LINE}, SM %{MULTIPART_MISSING_SEMICOLON}, IQ %{MULTIPART_INVALID_QUOTING}, IP %{MULTIPART_INVALID_PART}, IH %{MULTIPART_INVALID_HEADER_FOLDING}, FL %{MULTIPART_FILE_LIMIT_EXCEEDED}'" Transformation completed in 0 usec. Executing operator "!eq" with param "0" against MULTIPART_STRICT_ERROR. Operator completed in 0 usec. Rule returned 0. Recipe: Invoking rule 554bd70; [file "********************"] [line "80"] [id "200003"]. Rule 554bd70: SecRule "MULTIPART_UNMATCHED_BOUNDARY" "!@eq 0" "phase:2,auditlog,id:200003,t:none,log,deny,status:44,msg:'Multipart parser detected a possible unmatched boundary.'" Transformation completed in 0 usec. Executing operator "!eq" with param "0" against MULTIPART_UNMATCHED_BOUNDARY. Operator completed in 0 usec. Rule returned 0. Recipe: Invoking rule 554cbe0; [file "*********************************"] [line "94"] [id "200004"]. Rule 554cbe0: SecRule "TX:/^MSC_/" "!@streq 0" "phase:2,log,auditlog,id:200004,t:none,deny,msg:'ModSecurity internal error flagged: %{MATCHED_VAR_NAME}'" Rule returned 0. Hook insert_filter: Adding input forwarding filter (r 5541fc0). Hook insert_filter: Adding output filter (r 5541fc0). Initialising logging. Starting phase LOGGING. Recording persistent data took 0 microseconds. Audit log: Ignoring a non-relevant request. I can't see anything unusual in Fiddler. I'm using a ViewModel in the parameters of my action. 
No data is bound if SecRequestBodyAccess is set to On. I'm even logging all the Request.Form.Keys and values via log4net, but not getting any values there either. I'm starting to wonder if ModSecurity actually works with ASP.NET MVC or if there is some conflict with the ModSecurity http Module and the model binder kicking in. Does anyone have any suggestions or can anyone confirm they have ModSecurity working with an ASP.NET MVC website?
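    For reference, here is a minimal sketch (my own, not taken from the question) of the kind of ASP.NET MVC 3 view model and action that the default model binder would populate from the posted indexed fields shown in the debug log ([0].IsSelected, [0].Quantity, [0].VariantSku); the type and action names are invented, and this is simply the binding that appears to stop working when SecRequestBodyAccess is On:

    using System.Collections.Generic;
    using System.Web.Mvc;

    // Hypothetical view model matching the posted indexed fields.
    public class BasketLine
    {
        public bool IsSelected { get; set; }
        public int Quantity { get; set; }
        public string VariantSku { get; set; }
    }

    public class BasketController : Controller
    {
        // MVC's default model binder builds the list from [0].Quantity, [1].Quantity, and so on.
        // With SecRequestBodyAccess Off the list arrives populated; with it On, the question
        // reports that the request body never reaches the binder.
        [HttpPost]
        public ActionResult Update(IList<BasketLine> lines)
        {
            // ... update the basket here ...
            return RedirectToAction("Index");
        }
    }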

    Read the article

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Web, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help The Microsoft Development Network Community Center ·The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it" Anne said. 
There is less emphasis on formal titles. Anne mentions that her own title "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization! To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model. The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot explore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. 
It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • C# in Depth, Third Edition by Jon Skeet, Manning Publications Co. Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/10/24/c-in-depth-third-edition-by-jon-skeet-manning-publications.aspx

    I started reading this ebook on September 28, 2013, the same day it was sent my way by Manning Publications Co. for review, while it was still fresh off the press. So, first thing: thanks to Manning for this opportunity and for a free copy of this must-have-on-every-C#-developer's-desk book! Several hours ago I finished reading it (well, except for a large portion of its quite lengthy appendix), and I jumped into writing this review right away while still full of emotions and impressions from reading it thoroughly and running the code examples.

    Before I go any further, I would like to say that I used to program on various platforms using various languages, starting with the mainframe and ending on Windows, and that I gradually shifted toward dealing with databases more than anything. However, I happened to program a lot in C# 1 when it was first released, then in some C# 2, with a big leap from there to C# 5, so my perception and experience reading this book may differ from yours. It is also somewhat funny that back then, knowing some Java and seeing C# 1 released, I initially drew the parallel that it was a copycat language; how wrong was I… Interestingly, Jon programs in Java full time, yet how little that was mentioned in the book!

    So, more on the book. Be informed, this is not a typical “recipes” or “cookbook” title or any set of ready solutions; rather, it targets mature, advanced developers who not only know how to use a number of features but are willing to understand how the language operates “under the hood”. I must state immediately that, at the same time, I am glad the author did not go into the murky depths of MSIL; for me this is a very welcome decision in covering a modern language such as C#, thank you Jon!

    Frankly, not all was rosy regarding the tone and structure of the book; the first half or so in particular filled me with negative and positive emotions overpowering each other. To expand on that, some statements in the book appeared to me to be biased, or filled with prejudice; it started to look as if it had a PR slant to it, but thankfully this was all gone by the end of the first third of the book. Specifically, there is the mention of the C# language's popularity: Java is the #1 language as per https://sites.google.com/site/pydatalog/pypl/PyPL-PopularitY-of-Programming-Language (many other sources put C at the top, which I highly doubt), and many interesting functional languages such as Clojure and Groovy have appeared and gained huge traction running on top of Java/JVM, whereas C# does not enjoy such a situation. If we want to discuss popularity in general, and how fast a developer can find a new job that pays well, it would indeed be Java, C++ or PHP, never C#. Or that phrase about language preference being a personal issue? We choose where to work, or we are chosen, because of the technology used at a given software shop, not vice versa. The book, though, is technically very accurate, with valid code and concise examples, but I wish the author had given more concrete, real-life examples of where each feature should be used, not just how.

    Another point to realize before you get the book is that it is almost a live book, which started to be written before C# 3 was even around, so a lot of ground (nearly half of the book) is covered on the pre-C# 3 feature releases. If you already have a solid background in the previous releases and do not plan to upgrade, perhaps half of the book can be skipped; otherwise this book is surely highly recommended. Alas, for me most of it was a hard read. It was not boring (well, only maybe twice); it was just hard to grasp some concepts, but do not get me wrong, it did make me pause on several occasions and made me read and re-read a page or two. At times I even wondered if I have any IQ at all (LOL).

    Be prepared to read A LOT on generics, not that they are widely used in the field (I happen to work as a consultant and have gone through a lot of code at many places); my impression is that developers today, in the best case, program using examples found at OpenStack.com. Also, unlike the Java world, where having the most recent version is nearly mandated by the OSS, most companies on the Microsoft platform are almost never tempted to upgrade the .NET version very soon or very often. As a side note, I was glad to recently see code that included a nullable variable (the myvariable? notation), and this made me smile; besides, I recommended this book to that person to expand her knowledge.

    The good things about this book are that Jon maintains an active forum, prepared code snippets, and even a small program (Snippy) that is happy to run the sample code, saving you from writing any plumbing code.

    A tad now on the C# language itself: it has certainly enjoyed a wonderful road toward perfection and very high adoption, especially for ASP development. But to me all the recent features that made this statically typed language more dynamic look strange. Don't we have F#? Which is supposed to be the dynamic language? Why do we need a hybrid language? Now developers live their lives in a dualism of static and dynamic variables! And LINQ to SQL is covered in depth, but wasn't it supposed to be dropped? Also it seems that very little is being added, and at a slower pace; e.g. Roslyn will come in late 2014 perhaps, and will probably be the only main feature. Again, it is quite hard to read this book because various chapters cover various C# versions; if only I could remember what was covered exactly where! Given that it has so many jumps/links back and forth, I recommend the ebook format to make the navigation easier, and I do recommend using software that allows bookmarking. Also make sure you have access to plenty of coffee and pizza (hey, you probably know this joke about who a programmer is)!

    In closing, if you are stuck at the C# 1 or 2 level, it is time to embrace the power of C# 5! Finally, to compliment Manning, this book, unlike any other publisher's so far, was equally readable (that is, well formatted) on my tablet as in Adobe Reader on a laptop.
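    For readers who have not met the nullable syntax the reviewer mentions, here is a tiny C# illustration (mine, not from the book or the review) of what the myvariable? shorthand means:

    using System;

    class NullableDemo
    {
        static void Main()
        {
            // int? is shorthand for Nullable<int>: a value type that can also hold null.
            int? quantity = null;

            if (quantity.HasValue)
                Console.WriteLine("Quantity is " + quantity.Value);
            else
                Console.WriteLine("Quantity has not been set.");

            // GetValueOrDefault() substitutes the type's default value (0 here) when null.
            int safeQuantity = quantity.GetValueOrDefault();
            Console.WriteLine(safeQuantity);
        }
    }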

    Read the article

  • Community Conversation

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Applications User Experience Senior Director Laurie Pattison (left) with Anne Gentle at the User Assistance Europe Conference In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: * Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help * The Microsoft Development Network Community Center * ·The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. * Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches * The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes are reaching out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. 
It is largely a case now of "inventing the job while you're doing it, instead of being hired for it" Anne said. There is less emphasis on formal titles. Anne mentions that her own title "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization! To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model. The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot explore the model's possibilities. 
The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • CodePlex Daily Summary for Sunday, June 06, 2010

    CodePlex Daily Summary for Sunday, June 06, 2010New ProjectsActive Worlds Dot Net Wrapper (Based on AwSdk): Active Worlds Dot Net Wrapper (Based on AwSdk)Combina: Smart calculator for large combinatorial calculations.Concurrent Cache: ConcurrentCache is a smart output cache library extending OutputCacheProvider. It consists of in memory, cache files and compressed files modes and...Decay: Personal use. For learningFazTalk: FazTalk is a suite of tools and products that are designed to improve collaboration and workflow interactions. FazTalk takes an innovative approach...grouped: A peer to peer text editor, written in C# [update] I wrote this little thing a while back and even forgot about it, I stopped coding for more tha...HitchARide MVC 2 Sample: An MVC 2 sample written as part of the Microsoft 2010 London Web Camp based on the wireframes at http://schematics.earthware.co.uk/hitcharide. Not...Inspiration.Web: Description: A simple (but entertaining) ASP.NET MVC (C#) project to suggest random code names for projects. Intended audience: People who ne...NetFileBrowser - TinyMCE: tinyMCE file plugin with asp.netOil Slick Live Feeds: All live feeds from BP's Remotely Operated VehiclesParticle Lexer: Parser and Tokenizer libraryPdf Form Tool: Pdf Form Tool demonstrates how the iTextSharp library could be used to fill PDF forms. The input data is provided as a csv file. The application ...Planning Poker Windows Mobile 7: This project is a Planning Poker application for Windows Mobile 7 (and later?). RandomRat: RandomRat is a program for generating random sets that meet specific criteriaScience.NET: A scientific library written in managed code. It supports advanced mathematics (algebra system, sequences, statistics, combinatorics...), data stru...Spider Compiler: Spider Compiler parses the input of a spider programming source file and compiles it (with help of csc.exe; the C#-Compiler) to an exe-file. This p...Sununpro: sunun's project for study by team foundation server.TFS Buddy: An application that manipulates your I-Buddy whenever something happens in your Team Foundation ServerValveSoft: ValveSysWiiMote Physics: WiiMote Physics is an application that allows you to retrieve data from your WiiMote or Balance Board and display it in real-time. It has a number...WinGet: WinGet is a download manager for Windows. You can drag links onto the WinGet Widget and it will download a file on the selected folder. It is dev...XProject.NET: A project management and team collaboration platformNew Releases.NET DiscUtils: Version 0.9 Preview: This release is still under development. New features available in this release: Support for accessing short file names stored in WIM files Incr...Active Worlds Dot Net Wrapper (Based on AwSdk): Active World Dot Net Wrapper (0.0.1.85): Based on AwSdk 85AwSdk UnOfficial Wrapper Howto Use: C# using AwWrapper; VB.Net Import AwWrapperAjaxControlToolkit additional extenders: ZhecheAjaxControls for .NET3.5: Used AJAX Control Toolkit Release Notes - April 12th 2010 Release Version 40412. Fixed deadlock in long operation canceling Some other fixesAnyCAD: AnyCAD.v1.2.ENU.Install: http://www.anycad.net Parametric Modeling *3D: Sphere, Box, Cylinder, Cone •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extr...Community Forums NNTP bridge: Community Forums NNTP Bridge V29: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. 
This release has ad...Concurrent Cache: 1.0: This is the first release for the ConcurrentCache library.Configuration Section Designer: 2.0.0: This is the first Beta Release for VS 2010 supportDoxygen Browser Addin for VS: Doxygen Browser Addin - v0.1.4 Beta: Support for Visual Studio 2010 improved the logging of errors (Event Logs) Fixed some issues/bugs Hot key for navigation "Control + F1, Contr...Folder Bookmarks: Folder Bookmarks 1.6.2: The latest version of Folder Bookmarks (1.6.2), with new UI changes. Once you have extracted the file, do not delete any files/folders. They are n...HERB.IQ: Beta 0.1 Source code release 5: Beta 0.1 Source code release 5Inspiration.Web: Initial release (deployment package): Initial release (deployment package)NetFileBrowser - TinyMCE: Demo Project: Demo ProjectNetFileBrowser - TinyMCE: NetFileBrowser: NetImageBrowserNLog - Advanced .NET Logging: Nightly Build 2010.06.05.001: Changes since the last build:2010-06-04 23:29:42 Jarek Kowalski Massive update to documentation generator. 2010-05-28 15:41:42 Jarek Kowalski upda...Oil Slick Live Feeds: Oil Slick Live Feeds 0.1: A the first release, with feeds from the MS Skandi, Boa Deep C, Enterprise and Q4000. They are live streams from the ROV's monitoring the damaged...Pcap.Net: Pcap.Net 0.7.0 (46671): Pcap.Net - June 2010 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes...sqwarea: Sqwarea 0.0.289.0 (alpha): API supportTFS Buddy: TFS Buddy First release (Beta 1): This is the first release of the TFS Buddy.Visual Studio DSite: Looping Animation (Visual C++ 2008): A solider firing a bullet that loops and displays an explosion everytime it hits the edge of the form.WiiMote Physics: WiiMote Physics v4.0: v4.0.0.1 Recovered from existing compiled assembly after hard drive failure Now requires .NET 4.0 (it seems to make it run faster) Added new c...WinGet: Alpha 1: First Alpha of WinGet. It includes all the planned features but it contains many bugs. Packaged using 7-Zip and ClickOnce.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)PHPExcelpatterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsCommunity Forums NNTP bridgeRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSIonics Isapi Rewrite FilterStyleCopsmark C# LibraryFarseer Physics Enginepatterns & practices: Composite WPF and Silverlight

    Read the article

  • People, Process & Engagement: WebCenter Partner Keste

    - by Michael Snow
Within the WebCenter group here at Oracle, discussions about people, process and engagement cross over many vertical industries and products. Amidst our growing partner ecosystem, the community provides us insight into great customer use cases every day. Such is the case with our partner, Keste, who provides us a guest post on our blog today with an overview of their innovative solution for a customer in the transportation industry. Keste is an Oracle software solutions and development company headquartered in Dallas, Texas. As a Platinum member of the Oracle® PartnerNetwork, Keste designs, develops and deploys custom solutions that automate complex business processes.

Seamless Customer Self-Service Experience in the Trucking Industry with Oracle WebCenter Portal
Keste, Oracle Platinum Partner

Customer Overview
Omnitracs, Inc., a Qualcomm company, provides mobility solutions for trucking fleets to companies in the transportation industry. Omnitracs' mobility services include basic communications such as text as well as advanced monitoring services such as GPS tracking, temperature tracking of perishable goods, load tracking and weight distribution, and many others.

Customer Business Needs
Already the leading provider of mobility solutions for large trucking fleets, they chose to target smaller trucking fleets as new customers. However, their existing high-touch customer support model would not be cost-effective or scalable for managing and servicing these smaller customers. Omnitracs needed to provide several self-service features to make customer support more scalable while keeping customer satisfaction levels high and the costs manageable. The solution also had to be very intuitive and easy to use. The systems that Omnitracs sells to these trucking customers require professional installation, and smaller customers need to track and schedule the installation. Information captured in Oracle eBusiness Suite needed to be readily available for new customers to track these purchases and delivery details. Omnitracs wanted a high-impact user interface to significantly improve customer experience, with the ability to integrate with EBS, provisioning systems, as well as CRM systems that were already implemented. Omnitracs also wanted to build an architecture platform that could potentially be extended to other portals. Omnitracs' stated goal was to deliver an "eBay-like" or "Amazon-like" experience for all of their customers so that they could reach a much broader market beyond their large company customer base.

Solution Overview
In order to manage the increased complexity, the growing support needs of global customers and improve overall product time-to-market in a cost-effective manner, IT began to deliver a self-service model. This self-service model not only transformed numerous business processes but is also allowing the business to keep up with the growing demands of the (internal and external) customers. This solution was a customer service Portal that provided self-service capabilities for large and small customers alike for activation of mobility products, managing add-on applications for the devices (much like the Apple App Store), transferring services when trucks are sold to other companies, as well as deactivation, all without the involvement of a call service agent or sending multiple emails to different Omnitracs contacts.
This is a conceptual view of the Customer Portal showing the details of the components that make up the solution. 12.00 The portal application for transactions was entirely built using ADF 11g R2. Omnitracs’ business had a pressing requirement to have a portal available 24/7 for its customers. Since there were interactions with EBS in the back-end, the downtimes on the EBS would negate this availability. Omnitracs devised a decoupling strategy at the database side for the EBS data. The decoupling of the database was done using Oracle Data Guard and completely insulated the solution from any eBusiness Suite down time. The customer has no knowledge whether eBS is running or not. Here are two sample screenshots of the portal application built in Oracle ADF. Customer Benefits The Customer Portal not only provided the scalability to grow the business but also provided the seamless integration with other disparate applications. Some of the key benefits are: Improved Customer Experience: With a modern look and feel and a Portal that has the aspects of an App Store, the customer experience was significantly improved. Page response times went from several seconds to sub-second for all of the pages. Enabled new product launches: After successfully dominating the large fleet market, Omnitracs now has a scalable solution to sell and manage smaller fleet customers giving them a huge advantage over their nearest competitors. Dozens of new customers have been acquired via this portal through an onboarding process that now takes minutes Seamless Integrations Improves Customer Support: ADF 11gR2 allowed Omnitracs to bring a diverse list of applications into one integrated solution. This provided a seamless experience for customers to route them from Marketing focused application to a customer-oriented portal. Internally, it also allowed Sales Representatives to have an integrated flow for taking a prospect through the various steps to onboard them as a customer. Key integrations included: Unity Core Salesforce.com Merchant e-Solution for credit card Custom Omnitracs Applications like CUPS and AUTO Security utilizing OID and OVD Back end integration with EBS (Data Guard) and iQ Database Business Impact Significant business impacts were realized through the launch of customer portal. It not only allows the business to push through in underserved segments, but also reduces the time it needs to spend on customer support—allowing the business to focus more on sales and identifying the market for new products. Some of the Immediate Benefits are The entire onboarding process is now completely automated and now completes in minutes. This represents an 85% productivity improvement over their previous processes. And it was 160 times faster! With the success of this self-service solution, the business is now targeting about 3X customer growth in the next five years. This represents a tripling of their overall customer base and significant downstream revenue for the ongoing services. 90%+ improvement of customer onboarding and management process by utilizing, single sign on integration using OID/OAM solution, performance improvements and new self-service functionality Unified login for all Customers, Partners and Internal Users enables login to a common portal and seamless access to all other integrated applications targeted at the respective audience Significantly improved customer experience with a better look and feel with a more user experience focused Portal screens. 
Helped sales of the new product by having an easy way of ordering and activating the product. Data Guard helped increase availability of the Portal to 99%+ and make it independent of EBS downtime. This gave customers the feel of high availability of the portal application.

Some of the anticipated longer-term benefits are:
Platform that can be leveraged to launch any new product introduction and enable all product teams to reach new customers and new markets
Easy integration with content management to allow business owners more control of the product catalog
Overall reduced TCO with standardization of the Oracle platform
Managed IT support cost savings through optimization of technology skills needed to support and modify this solution

    Read the article

  • Is it worth moving from stored procedures to linq ?

    - by Josef
I'm looking at standardizing programming in an organisation. Half uses stored procedures and the other half Linq. From what I've read there is still some debate going on on this topic. My concern is that MS is trying to slip in its own proprietary query language, LINQ, to make SQL redundant. If, a few years back, Microsoft had tried to win customers from Oracle and Sybase with their MSSQL database and stated that it didn't use SQL but their own proprietary query language, i.e. LINQ, I doubt many would have switched. I believe that is exactly what is happening now by introducing it into the application business layer. I have used MS for many years but there is one gripe that I have with them and that is that they change their direction a lot. By a lot I mean new releases of .net, Silverlight etc. are more than 30% different from the previous version. So by the time you become productive a new release is on the way. As things stand now a web developer using .net would need to know either vb.net or c#, xml, xaml, javascript, html, sql and now linq. That doesn't make for good productivity in my books. My concern is that once we all start using LINQ, MS will start changing it between releases, and it will become an ever-changing landscape. I believe that 'linq to sql' has already been deprecated. At least with SQL we are dealing with a more stable and standardized language. Are we looking at a programming revolution or a marketing campaign? As far as I know other languages like Cobol have stayed the same for years. A Cobol programmer from 20 years ago could pick up today's code and start working on it. Could a VB3 person work on a modern .net web app? Would these large changes need to be made if the underlying original foundation had been sound? I worry about following MS's shaky roadmap with its dead ends and double-backs. Are there any architects out there who feel the same? Regards, Josef
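To make the trade-off concrete, here is a small, hedged illustration that is not from the original post; the Customers table, the usp_GetCustomersByCity procedure and the connection string are invented for the example. The same lookup is written once against a stored procedure through plain ADO.NET, and once as a LINQ to SQL query that the provider translates into T-SQL at runtime:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int CustomerId;
    [Column] public string Name;
    [Column] public string City;
}

public static class CustomerQueries
{
    const string ConnectionString = "Server=.;Database=Shop;Integrated Security=true";

    // Option 1: call an existing stored procedure through ADO.NET.
    public static void PrintViaStoredProcedure(string city)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("usp_GetCustomersByCity", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@City", city);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine(reader["Name"]);
        }
    }

    // Option 2: the same query in LINQ to SQL; the expression is
    // translated to T-SQL when the results are enumerated.
    public static void PrintViaLinq(string city)
    {
        using (var db = new DataContext(ConnectionString))
        {
            var customers = from c in db.GetTable<Customer>()
                            where c.City == city
                            select c;
            foreach (var c in customers)
                Console.WriteLine(c.Name);
        }
    }
}

Either way the database still executes SQL; the debate is mostly about where the query text lives and how strongly it is typed and checked at compile time.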

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
Learning new technologies can be daunting. If you've never used a Mac before, you'll probably be a bit baffled at first. But you're probably at least coming from a desktop computing background (Windows), so you have a common frame of reference. But what if you're just now learning to use a relational database? Yes, you've played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle Database. Here are 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started.

1. It's free
No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don't have any money after books and lab fees anyway. And most employees don't like having to ask for 'special' software anyway. So avoid all of that and make sure the free stuff doesn't suit your needs first. Upgrades are available on a regular basis, also at no cost, and support is freely available via our public forums.

2. It will run pretty much anywhere
Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a Windows VM to run your Windows-only software on your lab machine.

3. Anyone can install it
There's no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here's a video tutorial to see how to get started.

4. It's ubiquitous
I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer's everywhere. It's had over 2,500,000 downloads in the past year, and is one of the most downloaded items from OTN. This means if you need help, there's someone sitting nearby who can assist, and since they're in the same tool as you, they'll be speaking the same language.

5. Simple User Interface
Up-up-down-down-left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or, you can use the worksheet to write your queries and programs in. There's only one toolbar, and just a few buttons. If you're like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole 'A', 'B', 'SELECT', and 'START' controls. If you're new to Oracle, you shouldn't have the double workload of learning a new, complicated tool as well.

6. It's not a 'black box'
Click through your objects, but also get the SQL that drives the GUI. As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI doesn't mean you're ceding your responsibility to learn the underlying code that makes the database work.

7. It's four tools in one
It's not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change.

8. Great learning resources available
Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace?
9. You can use it to teach yourself SQL
Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, 'just like Access' – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with its completion insight feature. And if you have syntax errors, those will be highlighted – just like your misspelled words in your favorite word processor.

10. It scales to match your experience level
You won't be a n00b forever. In 6-8 months, when you're ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the 'advanced' tool.

11. Wait, you said this was a 'Top 10' list?
Yes. Yes, I did. I'm using this 'trick' to get you to continue reading because I'm going to say something you might not want to hear. Are you ready? Tools won't replace experience, failure, hard work, and training. Just because you have the keys to the car doesn't mean you're ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don't like the people that pick up tools without the know-how to properly use them. If you don't understand what 'TRUNCATE' means, don't try it out. Try picking up a book first. Of course, it's very nice to have your own sandbox to play in, so you don't upset the other children. That's why I really like our Dev Days Database Virtual Box image. It's your own database to learn and experiment with.

    Read the article

  • How to get SQL Railroad Diagrams from MSDN BNF syntax notation.

    - by Phil Factor
    pre {margin-bottom:.0001pt; font-size:8.0pt; font-family:"Courier New"; margin-left: 0cm; margin-right: 0cm; margin-top: 0cm; } On SQL Server Books-On-Line, in the Transact-SQL Reference (database Engine), every SQL Statement has its syntax represented in  ‘Backus–Naur Form’ notation (BNF)  syntax. For a programmer in a hurry, this should be ideal because It is the only quick way to understand and appreciate all the permutations of the syntax. It is a great feature once you get your eye in. It isn’t the only way to get the information;  You can, of course, reverse-engineer an understanding of the syntax from the examples, but your understanding won’t be complete, and you’ll have wasted time doing it. BNF is a good start in representing the syntax:  Oracle and SQLite go one step further, and have proper railroad diagrams for their syntax, which is a far more accessible way of doing it. There are three problems with the BNF on MSDN. Firstly, it is isn’t a standard version of  BNF, but an ancient fork from EBNF, inherited from Sybase. Secondly, it is excruciatingly difficult to understand, and thirdly it has a number of syntactic and semantic errors. The page describing DML triggers, for example, currently has the absurd BNF error that makes it state that all statements in the body of the trigger must be separated by commas.  There are a few other detail problems too. Here is the offending syntax for a DML trigger, pasted from MSDN. Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger) CREATE TRIGGER [ schema_name . ]trigger_name ON { table | view } [ WITH <dml_trigger_option> [ ,...n ] ] { FOR | AFTER | INSTEAD OF } { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] } [ NOT FOR REPLICATION ] AS { sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME <method specifier [ ; ] > }   <dml_trigger_option> ::=     [ ENCRYPTION ]     [ EXECUTE AS Clause ]   <method_specifier> ::=  This should, of course, be /* Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger) */ CREATE TRIGGER [ schema_name . ]trigger_name ON { table | view } [ WITH <dml_trigger_option> [ ,...n ] ] { FOR | AFTER | INSTEAD OF } { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] } [ NOT FOR REPLICATION ] AS { {sql_statement [ ; ]} [ ...n ] | EXTERNAL NAME <method_specifier> [ ; ] }   <dml_trigger_option> ::=     [ ENCRYPTION ]     [ EXECUTE AS CLAUSE ]   <method_specifier> ::=     assembly_name.class_name.method_name I’d love to tell Microsoft when I spot errors like this so they can correct them but I can’t. Obviously, there is a mechanism on MSDN to get errors corrected by using comments, but that doesn’t work for me (*Error occurred while saving your data.”), and when I report that the comment system doesn’t work to MSDN, I get no reply. I’ve been trying to create railroad diagrams for all the important SQL Server SQL statements, as good as you’d find for Oracle, and have so far published the CREATE TABLE and ALTER TABLE railroad diagrams based on the BNF. Although I’ve been aware of them, I’ve never realised until recently how many errors there are. Then, Colin Daley created a translator for the SQL Server dialect of  BNF which outputs standard EBNF notation used by the W3C. The example MSDN BNF for the trigger would be rendered as … /* Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger) */ create_trigger ::= 'CREATE TRIGGER' ( schema_name '.' ) ? trigger_name 'ON' ( table | view ) ( 'WITH' dml_trigger_option ( ',' dml_trigger_option ) * ) ? 
( 'FOR' | 'AFTER' | 'INSTEAD OF' ) ( ( 'INSERT' ) ? ( ',' ) ? ( 'UPDATE' ) ? ( ',' ) ? ( 'DELETE' ) ? ) ( 'NOT FOR REPLICATION' ) ? 'AS' ( ( sql_statement ( ';' ) ? ) + | 'EXTERNAL NAME' method_specifier ( ';' ) ? )   dml_trigger_option ::= ( 'ENCRYPTION' ) ? ( 'EXECUTE AS CLAUSE' ) ?   method_specifier ::= assembly_name '.' class_name '.' method_name Colin’s intention was to allow anyone to paste SQL Server’s BNF notation into his website-based parser, and from this generate classic railroad diagrams via Gunther Rademacher's Railroad Diagram Generator.  Colin's application does this for you: you're not aware that you are moving to a different site.  Because Colin's 'translator' it is a parser, it will pick up syntax errors. Once you’ve fixed the syntax errors, you will get the syntax in the form of a human-readable railroad diagram and, in this form, the semantic mistakes become flamingly obvious. Gunter’s Railroad Diagram Generator is brilliant. To be able, after correcting the MSDN dialect of BNF, to generate a standard EBNF, and from thence to create railroad diagrams for SQL Server’s syntax that are as good as Oracle’s, is a great boon, and many thanks to Colin for the idea. Here is the result of the W3C EBNF from Colin’s application then being run through the Railroad diagram generator. create_trigger: dml_trigger_option: method_specifier:   Now that’s much better, you’ll agree. This is pretty easy to understand, and at this point any error is immediately obvious. This should be seriously useful, and it is to me. However  there is that snag. The BNF is generally incorrect, and you can’t expect the average visitor to mess about with it. The answer is, of course, to correct the BNF on MSDN and maybe even add railroad diagrams for the syntax. Stop giggling! I agree it won’t happen. In the meantime, we need to collaboratively store and publish these corrected syntaxes ourselves as we do them. How? GitHub?  SQL Server Central?  Simple-Talk? What should those of us who use the system  do with our corrected EBNF so that anyone can use them without hassle?

    Read the article

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight processor systems from IBM and HP. Comparisons below are based upon system to system comparisons, highlighting Oracle's complete software and hardware solution. This database world record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2 demonstrating the power of Oracle's integrated hardware and software solution. The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB. The SPARC T4-4 server with four SPARC T4 processors (total of 32 cores) is 7% faster than the IBM Power 780 server with eight POWER7 processors (total of 32 cores) on the TPC-H @3000GB benchmark. The SPARC T4-4 server is 36% better in price performance compared to the IBM Power 780 server on the TPC-H @3000GB Benchmark. The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading. The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function. The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark. The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading. The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function. The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000TB benchmark which used twelve Sun Storage 2540-M2 arrays (rotating disk) and the TPC-H @1000TB benchmark which used four Sun Storage F5100 Flash Array devices (flash storage). [*] The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB. This demonstrates that the SPARC T4-4 server can handle the increasingly larger databases required of DSS systems. [*] The SPARC T4-4 server benchmark results demonstrate a complete solution of building Decision Support Systems including data loading, business questions and refreshing data. Each phase usually has a time constraint and the SPARC T4-4 server shows superior performance during each phase. [*] The TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons. Performance Landscape The table lists the leading TPC-H @3000GB results for non-clustered systems. 
TPC-H @3000GB, Non-Clustered Systems

System | Processor | P/C/T – Memory | Composite (QphH) | $/perf ($/QphH) | Power (QppH) | Throughput (QthH) | Database | Available
SPARC Enterprise M9000 | 3.0 GHz SPARC64 VII+ | 64/256/256 – 1024 GB | 386,478.3 | $18.19 | 316,835.8 | 471,428.6 | Oracle 11g R2 | 09/22/11
SPARC T4-4 | 3.0 GHz SPARC T4 | 4/32/256 – 1024 GB | 205,792.0 | $4.10 | 190,325.1 | 222,515.9 | Oracle 11g R2 | 05/31/12
SPARC Enterprise M9000 | 2.88 GHz SPARC64 VII | 32/128/256 – 512 GB | 198,907.5 | $15.27 | 182,350.7 | 216,967.7 | Oracle 11g R2 | 12/09/10
IBM Power 780 | 4.1 GHz POWER7 | 8/32/128 – 1024 GB | 192,001.1 | $6.37 | 210,368.4 | 175,237.4 | Sybase 15.4 | 11/30/11
HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 8/64/128 – 512 GB | 162,601.7 | $2.68 | 185,297.7 | 142,685.6 | SQL Server 2008 | 10/13/10

P/C/T = Processors, Cores, Threads
QphH = the Composite Metric (bigger is better)
$/QphH = the Price/Performance metric in USD (smaller is better)
QppH = the Power Numerical Quantity
QthH = the Throughput Numerical Quantity

The following table lists data load times and refresh function times during the power run.

TPC-H @3000GB, Non-Clustered Systems – Database Load & Database Refresh

System | Processor | Data Loading (h:m:s) | T4 Advan | RF1 (sec) | T4 Advan | RF2 (sec) | T4 Advan
SPARC T4-4 | 3.0 GHz SPARC T4 | 04:08:29 | 1.0x | 67.1 | 1.0x | 39.5 | 1.0x
IBM Power 780 | 4.1 GHz POWER7 | 05:51:50 | 1.5x | 147.3 | 2.2x | 133.2 | 3.4x
HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 08:35:17 | 2.1x | 173.0 | 2.6x | 126.3 | 3.2x

Data Loading = database load time
RF1 = power test first refresh transaction
RF2 = power test second refresh transaction
T4 Advan = the ratio of time to T4 time

Complete benchmark results found at the TPC benchmark website http://www.tpc.org.

Configuration Summary and Results

Hardware Configuration:
SPARC T4-4 server
4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads)
1024 GB memory
8 x internal SAS (8 x 300 GB) disk drives
External Storage: 12 x Sun Storage 2540-M2 array storage, each with 12 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache

Software Configuration:
Oracle Solaris 11 11/11
Oracle Database 11g Release 2 Enterprise Edition

Audited Results:
Database Size: 3000 GB (Scale Factor 3000)
TPC-H Composite: 205,792.0 QphH@3000GB
Price/performance: $4.10/QphH@3000GB
Available: 05/31/2012
Total 3 year Cost: $843,656
TPC-H Power: 190,325.1
TPC-H Throughput: 222,515.9
Database Load Time: 4:08:29

Benchmark Description

The TPC-H benchmark is a performance benchmark established by the Transaction Processing Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC. TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor).
QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years maintenance to the QphH. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor. Key Points and Best Practices Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual port 8Gb FC HBA using 2 ports. Each Sun Storage 2540-M2 array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve Sun Storage 2540-M2 arrays. These were stand alone IO tests. The peak IO rate measured from the Oracle database was 17 GB/sec. Oracle Solaris 11 11/11 required very little system tuning. Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks – so this is not an important metric in which to compare systems. The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes. Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays. The TPC-H Refresh Function (RF) simulates periodical refresh portion of Data Warehouse by adding new sales and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are a key for this function and the SPARC T4-4 server outperformed both the IBM POWER7 server and HP ProLiant DL980 G7 server. (See the RF columns above.) See Also Transaction Processing Performance Council (TPC) Home Page Ideas International Benchmark Page SPARC T4-4 Server oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Sun Storage 2540-M2 Array oracle.com OTN Disclosure Statement TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 QphH@3000GB, 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB available 10/13/10, 8 processors, 64 cores, 128 threads.
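For readers new to the TPC-H numbers quoted above, the figures tie together as follows (a worked check added here for clarity; the composite metric is the geometric mean of the power and throughput metrics, per the TPC-H specification):

\[ \mathrm{QphH@3000GB} = \sqrt{\mathrm{QppH} \times \mathrm{QthH}} = \sqrt{190{,}325.1 \times 222{,}515.9} \approx 205{,}792 \]

\[ \text{price/performance} = \frac{\text{total 3-year cost}}{\mathrm{QphH@3000GB}} = \frac{\$843{,}656}{205{,}792.0} \approx \$4.10 \text{ per QphH@3000GB} \]

Both values match the audited results listed in the configuration summary.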

    Read the article

  • Using inheritance and polymorphism to solve a common game problem

    - by Barry Brown
I have two classes; let's call them Ogre and Wizard. (All fields are public to make the example easier to type in.)

public class Ogre {
    int weight;
    int height;
    int axeLength;
}

public class Wizard {
    int age;
    int IQ;
    int height;
}

In each class I can create a method called, say, battle() that will determine who will win if an Ogre meets an Ogre or a Wizard meets a Wizard. Here's an example. If an Ogre meets an Ogre, the heavier one wins. But if the weight is the same, the one with the longer axe wins.

public Ogre battle(Ogre o) {
    if (this.weight > o.weight) return this;
    else if (this.weight < o.weight) return o;
    else if (this.axeLength > o.axeLength) return this;
    else if (this.axeLength < o.axeLength) return o;
    else return this; // default case
}

We can make a similar method for Wizards. But what if a Wizard meets an Ogre? We could of course make a method for that, comparing, say, just the heights.

public Wizard battle(Ogre o) {
    if (this.height > o.height) return this;
    else if (this.height < o.height) return o;
    else return this;
}

And we'd make a similar one for Ogres that meet a Wizard. But things get out of hand if we have to add more character types to the program. This is where I get stuck. One obvious solution is to create a Character class with the common traits. Ogre and Wizard inherit from Character and extend it to include the other traits that define each one.

public class Character {
    int height;
    public Character battle(Character c) {
        if (this.height > c.height) return this;
        else if (this.height < c.height) return c;
        else return this;
    }
}

Is there a better way to organize the classes? I've looked at the strategy pattern and the mediator pattern, but I'm not sure how either of them (if any) could help here. My goal is to reach some kind of common battle method, so that if an Ogre meets an Ogre it uses the Ogre-vs-Ogre battle, but if an Ogre meets a Wizard, it uses a more generic one. Further, what if the characters that meet share no common traits? How can we decide who wins a battle?
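One possible direction, sketched here as an illustration rather than a definitive answer: keep the generic height rule in the base class and let each subclass override battle() with its own same-type rules, falling back to the base rule for mixed match-ups. The sketch below is C#-style code; the member names simply mirror the question, and the specific tie-breaking rules are assumptions made for the example.

public class Character
{
    public int Height;

    // Generic rule: the taller character wins; ties go to the challenger.
    public virtual Character Battle(Character other)
    {
        if (Height > other.Height) return this;
        if (Height < other.Height) return other;
        return this;
    }
}

public class Ogre : Character
{
    public int Weight;
    public int AxeLength;

    public override Character Battle(Character other)
    {
        var o = other as Ogre;
        if (o == null) return base.Battle(other); // mixed match-up: use the generic rule

        // Ogre vs. Ogre: heavier wins, then longer axe, then the challenger.
        if (Weight != o.Weight) return Weight > o.Weight ? this : o;
        if (AxeLength != o.AxeLength) return AxeLength > o.AxeLength ? this : o;
        return this;
    }
}

public class Wizard : Character
{
    public int Age;
    public int IQ;

    public override Character Battle(Character other)
    {
        var w = other as Wizard;
        if (w == null) return base.Battle(other); // mixed match-up: use the generic rule

        // Wizard vs. Wizard: higher IQ wins, then greater age, then the challenger.
        if (IQ != w.IQ) return IQ > w.IQ ? this : w;
        if (Age != w.Age) return Age > w.Age ? this : w;
        return this;
    }
}

This is still single dispatch plus a type test; if many pairs of types need special handling, a full double-dispatch (visitor-style) arrangement scales better at the cost of extra boilerplate, and when two characters share no common traits the base class has to fall back to some neutral or arbitrary rule.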

    Read the article

  • Where are the function address literals in c++?

    - by academicRobot
First of all, maybe literals is not the right term for this concept, but it's the closest I could think of (not literals in the sense of functions as first class citizens).

<UPDATE> After some reading with help from the answer by Chris Dodd, what I'm looking for is literal function addresses as template parameters. Chris' answer indicates how to do this for standard functions, but how can the addresses of member functions be used as template parameters? Since the standard prohibits non-static member function addresses as template parameters (c++03 14.3.2.3), I suspect the workaround is quite complicated. Any ideas for a workaround? Below, the original form of the question is left as is for context. </UPDATE>

The idea is that when you make a conventional function call, it compiles to something like this:

    callq <immediate address>

But if you make a function call using a function pointer, it compiles to something like this:

    mov <memory location>,%rax
    callq *%rax

Which is all well and good. However, what if I'm writing a template library that requires a callback of some sort with a specified argument list, and the user of the library is expected to know what function they want to call at compile time? Then I would like to write my template to accept a function literal as a template parameter. So, similar to

    template <int int_literal> struct my_template {...};

I'd like to write

    template <func_literal_t func_literal> struct my_template {...};

and have calls to func_literal within my_template compile to callq <immediate address>. Is there a facility in C++ for this, or a workaround to achieve the same effect? If not, why not (e.g. some cataclysmic side effects)? How about C++0x or another language? Solutions that are not portable are fine. Solutions that include the use of member function pointers would be ideal. I'm not particularly interested in being told "You are a <socially unacceptable term for a person of low IQ>, just use function pointers/functors." This is a curiosity-based question, and it seems that it might be useful in some (albeit limited) applications. It seems like this should be possible since function names are just placeholders for a (relative) memory address, so why not allow more liberal use (e.g. aliasing) of this placeholder.

p.s. I use function pointers and function objects all the time and they are great. But this post got me thinking about the "don't pay for what you don't use" principle in relation to function calls, and it seems like forcing the use of function pointers or a similar facility when the function is known at compile time is a violation of this principle, though a small one.

Edit: The intent of this question is not to implement delegates, rather to identify a pattern that will embed a conventional function call (in immediate mode) directly into third-party code, possibly a template.
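For the free-function case mentioned in the update, standard C++ (including C++03) does accept a function's address as a non-type template parameter, which gives exactly the "function literal" effect described above. A minimal illustrative sketch follows; it is in C++ because the question is specific to C++ templates, and the greet and caller names are invented for the example.

#include <iostream>

void greet() { std::cout << "hello" << std::endl; }

// F is a compile-time constant, so the compiler can emit a direct call
// (and often inline it) instead of an indirect call through a pointer.
template <void (*F)()>
struct caller
{
    void operator()() const { F(); }
};

int main()
{
    caller<&greet> c;   // the function's address is the template argument
    c();
    return 0;
}

Member functions are the harder case raised in the update, since even when their pointer is known at compile time they still need an object to be called on.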

    Read the article

  • CodePlex Daily Summary for Friday, June 04, 2010

    CodePlex Daily Summary for Friday, June 04, 2010New Projects23 Umbraco addons: 23 Umbraco addonsAdd-ons for EPiServer Relate+: In the Add-ons for EPiServer Relate+ you will find add-ons, extensions and modules that work together with EPiServer Relate+.Advanced Mail Merge (AMM) for Microsoft Office: Advanced Mail Merge for Microsoft Word 2007/2010, offers great extensable functionality: - Merge to document (PDF) - Merge to attachment - Use Out...Cenobith RLS Sample: Simple implementation of Row Level Security for Microsoft SQL ServerCodingWheels.DataTypes: DataTypes tries to make it easier for developers to have concrete typesafe objects for working with many common forms of data. Many times these dat...DigitArchive: Digit Archive makes it easy for the DIGIT magazine readers to find the correct software or movie bundled in the media along with the magazine. You'...dNet.DB: dNetDB is a .net framework that simplifies model and data access by providing a database independent object-based persistence, where objects are pe...Dynamic Application Framework: The Dynamic Application Framework provides a highly flexible environment for creating applications. Multiple UI and Execution Environments, along w...ECoG: ECoG toolkitFB Toolkit with Contracts: This is a research project where I have inserted code contracts into the Facebook Toolkit source code., version 3.1 beta. This delivers an efficien...GeneCMS: GeneCMS allows users to generate static HTML based websites by offering an ASP.NET editing front-end that can be run in the local machine. It is ta...HooIzDat: HooIzDat is game that asks, who the heck is that?! It's a two player game where your task is to guess your opponent's person before he or she guess...JingQiao.Interacting: JingQiao Interacting MessagingKanbanBoard: Visual task board for Kanban and Scrum.Learning CSharp: Just Learning CSharpMammoth: mammothMapWindow Mobile: MapWindow Mobile is mobile GIS Software which can run on windows mobile, developed in C# .NET Compact Framework. It provides basic GIS functionalit...Mindless Setback: Setback is a card game popular in New England. This project uses a combination of brute force and Monte Carlo methods to play Setback. This is an e...MSNCore(DirectUI) Element Viewer: MSNCore Element Viewer is an application designed to enumerate the elements with in applications built with MSNCore.dll and UXCore.dll. This appli...MSVN Team: bài tập thầy lườngNugget: Web Socket Server: A web socket server implemented in c#. The goal of the projects is to create an easy way to start using HTML5 web sockets in .NET web applications.oSoft ColorPicker Control for Visual Studio 2010: oSoft ColorPicker is an user control that can be used instead of the ColorDialog when you want to allow your users to select a color in a windows f...Prism Software Factory: The Prism Software Factory is a software factory for Visual Studio 2010 assisting developers in the process of building WPF & Silverlight applicati...Project Lion: Project lion is forum developed in Silverlight technology. Refix - .NET dependency management: Refix is an attempt to solve the problem of binary dependency management in large .NET solutions. It will achieve the goal using (amongst other thi...Rich Task List: Rich Task List is a tutorial project for DotNetNuke Module Development.SharePoint PowerRSS: Easy/Clean way to get SharePoint list data via more standard RSS feed. 
I found CleanRSS.aspx as part of SPRSS: Enhanced RSS Functionality for WSS ...SOAPI - StackOverflow API Generator: Generates, directly from the self documenting StackOverflow API specification, an end-to-end, fully documented API wrapper library with Visual Stu...SQL Script Application Utility: This C# project allows you to apply scripts to a database for table creation, data creation, etc. You can keep DDL in separate SQL scripts which c...Sql Server Reports Viewer: Sql Server Reports Viewer makes it easier to render Sql Server Reports without the need to setup a SSRS Server. This makes deployments a breeze. ...StorageHD: StorageHD system for large video filesUrzaGatherer: UrzaGatherer is a WPF 4.0 client application to handle Magic The Gathering cards collections. You can manage expansions, blocks and all informatio...webrel: This tool executes simple relational algebra expressions. It is useful for learning of Database course. Javascript and xhtml is used to develop thi...World Wide Grab: World Wide Grab allows retrieval and integration of various semi-structured data sorces, expecially Web applications. It turns every available res...New Releases3FD - Framework For Fast Development (C++): Alpha 3: This release was compiled in Visual Studio Release mode. It means you can use it in whatever compiler you want. However, the compatibility with ano...Advanced Mail Merge (AMM) for Microsoft Office: Advanced MailMerge 2007.zip: Release 1.1.0.0Army Bodger: Bodger 3 Archetype Test: Ok so it's later and I've largely finished it. Right now the Space Wolves have their Troops written and one HQ unit. The equipment panel largely wo...AwesomiumDotNet: AwesomiumDotNet 1.6 beta: Preview of AwesomiumDotNet 1.6.Bojinx: Bojinx Core V4.6: New features in this release: Greatly improved logging for INFO and DEBUG. Improved the getClassName function in ObjectUtils. Added the ability ...Cenobith RLS Sample: Sample App: Change connection strings in App.config and Web.config files.Christoc's DotNetNuke C# Module Development Template: 00.00.02: A minor update from the original release with a few fixes including Localization and some updated documentation.Community Forums NNTP bridge: Community Forums NNTP Bridge V25: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...DEWD: DEWD for Umbraco v1.0: Beta release of the package. Functional feature set and fairly stable. Since the alpha: Validation on input fields Custom view controls Ability...DotNetNuke Developers Help File: DNNHelpSystem 05.04.02: Release of the developer core API help documentation of DotNetNuke in MSDN style format, both as .CHM stand alone file as well as a html website ba...Drive Backup: Drive Backup V.0604: This release includes the following fixes/features: * Fixed incompatibility with some USB drives (those marked as “fixed” by Windows) * Ad...Event Scavenger: Version 3.3 (Refresh): Archiving bit added to database plus archiving stored procedure updated. Rest of items just refreshed. Database set to version 3.3Expression Encoder Batch Processor: Expression Batch v0.3: Now set the newly-converted file's Created DateTime to equal the source file's. 
This helps keep your videos sorterd chronologically in media librar...Folder Bookmarks: Folder Bookmarks 1.6.1: The latest version of Folder Bookmarks (1.6.1), with Mini-Menu bug fixes and 'Help' feature - all the instructions needed to use the software (If y...Genesis Smart Client Framework: Genesis v2.0 - Ruby User Experience Platform (UXP): This is the start of the rewrite of the entire framework. The rewrite will include support for XAML through WPF and Silverlight, WCF, Workflow Serv...Global: http requester tool: Added a brnad new console app for making http requests.GMap.NET - Great Maps for Windows Forms & Presentation: Hot Build: this is latest change-set build, unstable previewHERB.IQ: Alpha 0.1 Source code release 4: As of 6-23-10 @ 9:48ESTInfragistics Analytics Framework: Infragistics Analytics Framework 1.0: This project includes wrappers for the Infragistics controls that integrate with the recently launched Microsoft Silverlight Analytics Framework. T...Innovative Games: Cube Mapper: Cube Mapper is a small tool that takes in six textures and outputs a cube map that is a combination of the six textures. Cube Mapper supports .tga...jQuery Library for SharePoint Web Services: SPServices 0.5.6: This release is in an alpha state. Please only download it if you know what you are getting and are willing to test it. In any case, it's a bad ide...linq to jquery: jlinq v1.00 no doc: First public version of jlinq! no doc yet, soon too come!LinqSpecs: Version 1.0.1: Fabio Maulo has sent several patchs in order to make LinqSpecs to work with any linq provider other than in memory. Big KUDOS for him.mojoPortal: 2.3.4.4: see release notes on mojoportal.com Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on ...Nugget: Web Socket Server: Initial POC release: The initial proof of concept release. To try it out, open the Sample App.sln, set the ChatServer project as the start-up project, start debugging ...oSoft ColorPicker Control for Visual Studio 2010: oSoft ColorPicker Control for VS 2010 Beta 1: Beta 1Refix - .NET dependency management: Refix v0.1.0.48 ALPHA: First preview version of Refix command-line tool.SharePoint 2010 CSV Bulk Term Set Importer: Bulk Term Set Importer: Initial ReleaseSOAPI - StackOverflow API Generator: SOAPI Wrappers: SOAPI-JS First release as SOAPI-JS, SOAPI-CS coming shortly. Tests and example includedSQL Compact Toolbox: Beta 0.8.1: Initial test release - mind the bumps. Requires Visual Studio 2010.Thumb nail creator and image resizer: ThumbnailCreator1.2: this release fixes and issue that was occuring when the control was used inside paged dataTS3QueryLib.Net: TS3QueryLib.Net Version 0.23.17.0: Changelog Added Properties "IsSpacer" and "SpacerInfo" to ChannelListEntry. "IsSpacer" allows you to check whether the channel is a spacer channel ...UI Accessibility Checker: UI Accessibility Checker v.2.0: We are excited to announce the release of AccChecker 2.0! 
In addition to numerous bug fixes and usability improvements, these major features have...webrel: webrel 1.0: webrel 1.0WindStyle SlugHelper for Windows Live Writer: 1.2.0.0: 增加:可以配置是否忽略已经包含slug的日志,请在插件选项中配置; 增加:插件图标; 更新:支持最新Windows Live Writer,版本号14.0.8117.416。Work Recorder - Hold on own time!: WorkRecorder 1.1: +Only one instance can run #Change histogram to pie chartMost Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)PHPExcelpatterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsCommunity Forums NNTP bridgeRawrIonics Isapi Rewrite Filterpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSBlogEngine.NETFarseer Physics EngineMain projectMirror Testing System

    Read the article

  • CodePlex Daily Summary for Wednesday, June 09, 2010

    CodePlex Daily Summary for Wednesday, June 09, 2010New Projects.NET Transactional File Manager: Transactional File Manager is a .NET API that supports including file system operations such as file copy, move, delete in a transaction. It's an i...3D World Studio Content Pipeline for Windows Phone 7: This is a port of PhotonicGames' project: http://xna3dws.codeplex.com/releases/view/42994 for the Windows Phone 7 tools (XNA 4.0 CTP).Advanced Script Editor for 3D Rad: Advanced Script Editor makes it easier for 3D Rad coders to write scripts. Developed in C#, it features a functions list, a favourites list, object...Ajax ASP.Net Forum: A fast & lightweight open source free forum developed in ASP.Net 3.5, AJAX, CSS, SQL & Javascript Cache (filter-sort-move through table records at ...Axon: Axon is the home automation system that I will be running in my home. It will be a collection of different technologies and projects, often experim...BigBallz: Projeto de site de Bolões para campeonatos diversos. A princípio pensado para copa do mundo de futebol de 2010BigfootMVC: MVC Framework for DotNetNukeBigfootSQL: A StringBuilder for SQL. BigfootSQL was built with simplicity in mind. It assumes that you are comfortable writing SQL but dislike effort required ...Bxf (Basic XAML Framework): Basic Xaml Framework (Bxf) is a simple, streamlined set of UI components designed to demonstrate the minimum framework functionality required to ma...elZerf - elektronische Zeiterfassung: elektronisches Zeiterfassungsystem im Rahmen der Seminararbeit im Modul Web-Anwendungsprogrammierung.IntoFactories.Net - Samples: Project to host samples created by members of the IntoFactories.NET Team blog.Lanchonete: Sistema para controle de lanchonetes. Medieval Dynasties: Medieval Dynasties is a game written in C# 3.5 and XNA 3.1 at the moment. It is inspired by Crusader Kings, Total War and Civilization.PMMsg: A project to replace the standard messaging client on the Windows Mobile platform. Mainly geared towards Windows Mobile 6.5.3 VGA devices. Also an...PunkPong: PunkPong is an open source "Pong" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses keyboard or mouse. This cross-platform a...Renegade Legion Fighter Calculator: In working on assigning fighters to squadrons, flights, and groups for a campaign, I was struck by the sheer amount of calculations I had to make. ...Sharpotify - Spotify .Net Library: Sharpotify is a Spotify library in C#. It is based in Jotify and SharPot projects. It is not a libspotify wrapper, It is a full .Net Spotify protoc...Silverlight load on demand with MEF: With MEF, a Silverlight control can be split in several packages(xap files). Each package can contain one or more pages and it will download on dem...SOLID by example: Source code examples to undestood solid design principles. Most of them were taken from http://www.lostechies.com/SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: A version of RS.EXE that you can use with Forms Authentication in Native Mode. Use the following arguments to specify credentials (just like Basic ...Stripper: Stripper Remove Diacritics and other unwanted caracter to fabric a more standardized file naming.study: studyUncoverPIC: UncoverPIC is a Silverlight Game strongly inspired to the famous Arcade Game "GalsPanic" (see http://en.wikipedia.org/wiki/Gals_Panic ). 
It was dev...Unity3D Untitled MMO: Unity3D Untitled MMO FrameworkUnnamedShop: UnnamedShopXBStudio.asp.net.automation: A Unit Testing Automation library for asp.netXBStudio.Web: XBStudio Web ApplicationNew Releases3D World Studio Content Pipeline for Windows Phone 7: Initial Release (0.1): This is the first release of the project, with plenty of hackery and kludges to go around, but it mostly works! Let me know if you hit any bugs.Acies: Acies - Alpha Build 0.0.10: Alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249-4560-...Advanced Script Editor for 3D Rad: Advanced Script Editor - Version 2.6: Despite various previous releases on the 3D Rad forum, this is the first release on CodePlex.Ajax ASP.Net Forum: First Release: First Release prior to CodePlex Publish (send to admins)So, it doesn't all finish VERSION: 0.1.2 FEATURES Main Home Where all the Forums (called ...Artist Follower for Microsoft Access: Artist Follower 0.5.1: Artists Follower changes: Just one form to manage artists and links!!!Artist Follower for Microsoft Access: Artists Follower 0.5.0: This is the first release of Artist Follower.ASP.NET MVC SiteMap provider - MvcSiteMapProvider: MvcSiteMapProvider 2.0.0 CTP1: This is a community technology preview of MvcSiteMapProvider version 2.0. It is not backwards compatible with older MvcSiteMapProvider versions. ...B&W Port Scanner: Black`n`White Port Scanner 4.0: Version 4 includes: - Improved vulnerability detection tools - Report Creation - Improved Stability - Much better port information database - Nume...BaseCalendar: BaseControls 1.1: BaseControls 1.1 contains the BaseCalendar ASP.NET control. Changes: Rendering TH by default inside THEAD. Added option (ShowMinNumWeeks) to r...BigfootSQL: BigfootSQL Source Code: BigfootSQL C# Version 01Commerce Server 2009 Orders using Pipelines in a Console Application: ConsoleApplication To PLace Orders: ConsoleApplication To PLace Orders with Commerce Server 2009 foundationCommunity Forums NNTP bridge: Community Forums NNTP Bridge V33: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Community Forums NNTP bridge: Community Forums NNTP Bridge V34: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...ContainerOne - C# application server: V0.1.2.0: New minor release containing: Integration test component for runtime testing Refactored and cleaned solution files First unit testsExtend SmallBasic: Teaching Extensions v.020: Moved Tortoise.approve to ProgramWindow.TakeScreenShot()fleet It: v0.06 Alpha: v0.06 Alpha - Features Caching implemented for fleets Various Bug fixes Implemented Settings. Resolved logical issue with Getting fleets U...Frotz.NET: Frotz.NET B2: In addition to B1 changes: - Added ZTools to enable debugging view of zcode files - Added rudimentary scroll back buffer. 
B1 Changes: - Got Adapt...FsObserver: FsObserver 2.0: This is basically the same as FsObserver 1.0 but the "-help" documentation has been cleaned up somewhat and the code has been refactored so that it...GPdotNET - Genetic Programming Tool: GPdotNETv1.0: GPdotNET v.1.0 - more details on http.bhrnjica.wordpress.com/gpdotnetHERB.IQ: Alpha 0.1 Source code release 8: Alpha 0.1 Source code release 8imdb movie downloader: myImdb 0.9.3: myImdb 0.9.3imdb movie downloader: myImdb 0.9.4: myImdb 0.9.4jccc .NET smart framework: jccc .NET smart framework version 1.2010.06.07: jccc .NET smart framework version 1.2010.06.07 added oracle databases supportLongBar: LongBar 2.1 Build 313: - Fixed library and updates to work with updated live services - Options: You can disable shadow nowMDownloader: MDownloader-0.15.17.59623: Fixed FileFactory provider. Improvied postpone policies. Added network request limiter.MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1: Installer for MediaCoder.NET v1.0 beta1.1. It can now convert files with spaces in the path or filename. I have also created filter for the SaveFil...MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1 Source Code: Source Code for MediaCoder.NET v1.0 beta 1.1.mesoBoard: mesoBoard - 0.9.1 beta: Fixed file download permissions Released under the New BSD License.MPCLI: Alpha Release (0.1.0.0): This release has core functionality and is considered a potential candidate for a feature complete release of this library. However, suggestions fo...N2 CMS: 2.0: N2 is a lightweight CMS framework for ASP.NET. It helps professional developers build great web sites that anyone can update. Major Changes (1.5 -...NHTrace: NHTrace-47571: NHTrace-47571NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.125: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...NSoup: NSoup 0.2: NSoup 0.2 corresponds to jsoup version 1.1.1. List of changes can be viewed here.Opalis Community Releases: Integration Pack for Data Manipulation: The Integration Pack for Data Manipulation enables you to perform a wider variety of data manipulation tasks as well as aggregate data into common ...Performance Analysis of Logs (PAL) Tool: PAL v2.0 Beta 1: Fixed Counter Sorting: Fixed a minor bug where duplicate counter expression paths were not being removed. Analysis Added: Added LogicalDisk Read/...RoTwee: RoTwee (12.0.0.0): Trial version. 17925 Make it possible to change window sizeSharpotify - Spotify .Net Library: Sharpotify.Library 1.0: Sharpotify Library: Stable release. You can connect with spotify, search, browse (tracks, albums, artists), get a music stream, create and edit you...Silverlight load on demand with MEF: mal.Web.Silverlight.MEF 1.0.0.0: mal.Web.Silverlight.MEF 1.0.0.0sMAPtool: sMAPtool v0.7e (without Maps): + Added: color value expansion bar for hmap (right click to select color scheme) + Added: more complex hmap editing, uses now 4 point bounding rect...SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: Initial release: Enjoy!Stripper: Stripper 0.1.1 (CLi): Stripper Remove Diacritics and other unwanted caracters to fabric a more standardized file naming. 
Especially French caracter and maybe other lang...Unity3D Untitled MMO: v1: versionUrzaGatherer: UrzaGatherer 2.01a: New version with some minors bugs corrected.VCC: Latest build, v2.1.30608.0: Automatic drop of latest buildVCC: Latest build, v2.1.30608.1: Automatic drop of latest buildWatermarker.NET: 0.1.3811: A newer version with some improvements. I release this as a .zip archive, because settings are added here, so there will be .exe and .config files.Yet Another GPS: Alfa Release: Alfa working releaseMost Popular ProjectsDozer Enterprise Library for .NETEmployee Management SystemWiiMote PhysicsVisualStudio 2010 JavaScript OutliningSpider CompilerConcurrent CacheOil Slick Live FeedsCSUFVGDC Summer JamWinGetSiteMap Utility for DNN Blog ModuleMost Active ProjectsCommunity Forums NNTP bridgepatterns & practices – Enterprise LibraryRhyduino - Arduino and Managed CodejQuery Library for SharePoint Web ServicesRawrNB_Store - Free DotNetNuke Ecommerce Catalog ModuleAndrew's XNA HelpersBlogEngine.NETStyleCopCustomer Portal Accelerator for Microsoft Dynamics CRM

    Read the article

  • CodePlex Daily Summary for Thursday, June 03, 2010

    CodePlex Daily Summary for Thursday, June 03, 2010New ProjectsAlbatross: Albatross framework. We are still working on the documentation, more details will be available soon.ApiChange: ApiChange is the Swiss army knife for inspecting your assemblies from the command line. Now you can do basic operations like diff, who uses (method...BaseCalendar: BaseCalendar is a server-side ASP.NET web control (WebForms or MVC) that renders a calendar while giving you full control over the generated HTML. ...CESAVE: Proyectos para el Comité Estatal de Sanidad Vegetal.Closure Compiler w/ Annotations Visual Studio 2010 Snippets: This is an attempt to create reusable Visual Studio snippets to make working with closure compiler annotated JavaScript more productive. VS2010 ...Common Service Host: Common Service Host is a generic Windows Communication Service Host and factory that uses the Common Service Locator to create Service objects. ...DarkLight: DarkLight is a 2D Lighting Engine written in XNA, and allows developers to create 2D shadowing effects in their 2D games easily. It supports poi...Earn Burn Tracker: A tool to track earned value against a given release, initiative, feature set, and objects.eOfficeAACS: eOffice is an open source access control and attendance management system developed by e-bird Innovation (www.ebirdinfo.com).Its flexible design al...FLV Video conversion library for .Net 3.5: This is a component created to call the ffmpeg tool to convert various video formats to the Adobe Flash FLV output format. The component also takes...Google Moderator: .NET client library for the Google Moderator API.linq to jquery: provides support for linq to jquery objectsMobile Vikings Data: App to view your data usage RefBrowser: RefBrowserRESX Translator with Bing (from Microsoft Consulting Services, UK): A Windows Form application that automatically translates RESX files using Bing web servicesRhyduino - Remote Arduino Control via Managed Code: Rhyduino makes it easy for Visual Studio / Windows devs to control the Arduino using a computer. It's like supercharging your Arduino with all the ...SharePoint 2010 CSV Bulk Term Set Importer: Allows for multiple import of *.csv files to a given term group in SharePoint 2010 Term Store. It will create new term group based on the name pr...SharePoint Feature - Export history version to Excel: Add a function to list the action button, the ability to export history version of the item sheet to Excel from the specified date. Features suppo...SwEntry: A system that allows people to open doors by using a Bluetooth enabled phone. Things to Do with the DLR: This project is about ideas and sample code around the Dynamic Language Runtime.Work Recorder - Hold on own time!: Work Recorder is a office aid software which can recorde the time used on PC for researchers, office workers and students. And it is also a good he...xuezhixu: xuezhixu foundYaget: Yet Another Game Engine TechnologyNew ReleasesBackUpAnyWhere: backupanywhere RC1: this is the RC of our programBaseCalendar: BaseControls 1.0: BaseControls 1.0 contains the BaseCalendar ASP.NET control.BizTalk Server Pipeline Component Wizard: 2.20: Version suitable for 2010 release.CheckHeader: CheckHeader v0.8.6: The Microsoft .NET Framework 4.0 is needed to run this program.Chirpy - VS Add In For Handling Js, Css, and DotLess Files: Chirpy Installer for VS 2010 (Ver-1.0.0.2): VS 2010 Installer for the Chirpy AddIn. 
Version 1.0.0.2Christoc's DotNetNuke C# Module Development Template: 00.00.01: This is the initial release of Christoc's DotNetNuke C# Module Development Template. You can use the Template as-is, or you can customize the VSTem...Closure Compiler w/ Annotations Visual Studio 2010 Snippets: v1 release: The initial release of the projectCommunity Forums NNTP bridge: Community Forums NNTP Bridge V22: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Community Forums NNTP bridge: Community Forums NNTP Bridge V23: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Community Forums NNTP bridge: Community Forums NNTP Bridge V24: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...DarkLight: DarkLight Engine v1.0: This is the first version of the DarkLight engine and currently supports point, spot and area lights with no upper limit on the number of lights. ...DotNetNuke® Skin Collaborate: Collaborate Package 1.1.0: Newer version of Collaborate included fixes: - removed conditional code to display control panel - changed background color to match with backgroun...dotSpatial: System.Spatial.Projection Zip June 2, 2010: This version tries to fix a problem with reprojecting to UTM zones. It is still being tested though.Entity Framework Repository & Unit of Work Template: 1.0.1: This version has more than just the T4 template. I have added a new template that has a RepositoryHelper class for use with StructureMap. Also th...FLV Video conversion library for .Net 3.5: Beta 1: This is the first release of this project. Improvements may be added if necessary.HERB.IQ: Alpha 0.1 Preview: Only clone tab works, just setting up the GUI and getting the XML data handling working correctlyJetfire - Workflow DSL: V1.2.0: The complete source code required for a Jetfire system (server and client nexus) is included in the release. Highlights of Changes Full programmat...linq to jquery: linq to jquery alpha: beta development projectMapWindow6: MapWindow 6.0 June 2, 2010: This version fixes a problem with projecting to UTM zones. I'm not sure that this works perfectly yet. It seemed to require a zone adjustment by ...patterns & practices Web Client Developer Guidance: Developing Web Apps May 2010 Beta: This RelesaeThis drop includes updated documentation, links, and graphics. We are still looking for feedback on this release. Plans going forward...patterns & practices: Composite WPF and Silverlight: Prism 4.0 Drop 1: Prism 4.0 Drop 1 Welcome to the first drop of Prism 4.0 (formally known as the Composite Application Guidance for WPF and Silverlight). This drop i...Powershell4SQL: Version 1.3: Changes from version 1.2 Added support for -Confirm and -WhatIf parameters Added support for -Verbose mode. Includes SQL Batch text, parameters ...RESX Translator with Bing (from Microsoft Consulting Services, UK): v1.0: This is the initial release of the toolRhyduino - Remote Arduino Control via Managed Code: Beta Release (v0.80): LibraryAuto-detects connected Arduino devices. Uses system resources intelligently to take advantage of multiple CPU cores when present. 
Firmata ...SharePoint Feature - Export history version to Excel: Export Item List Version: - multilanguage support Czech, English Install: "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\stsadm.exe" -o addsol...Simulo: Simulo v2.5: That's the third release of Simulo (v2.5). For detailed info on what's new, read the changes log block at the project's home page. System requirem...Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.5: Please carefully follow the Installation Guideas there are additional actions that need to be undertaken in this release. As 1.4 with the followin...Spackle.NET: 4.1.0.0 Release: Added IEquatable<T> to Range<T>StreamInsight Samples: Microsoft StreamInsight Product Team Samples: This is the current snapshot of the samples created by the Streaminsight Product Team.Touch Mice: 0.1: Initial release of Touch MiceVCC: Latest build, v2.1.30602.0: Automatic drop of latest buildVivoSocial: VivoSocial 7.2.0: Version 7.2.0 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.2.0 rel...Work Recorder - Hold on own time!: WorkRecorder 1.0: +Finished Version 1.0Most Popular ProjectsCommunity Forums NNTP bridgeOutSyncASP.NET MVC Time PlannerNeatUploadMoonyDesk (windows desktop widgets)Mute4eXpress Persistent Objects (XPO) ToolkitAgUnit - Silverlight unit testing with ReSharperASP.NET MVC ExtensionsAviva Solutions C# Coding GuidelinesMost Active ProjectsCommunity Forums NNTP bridgeGMap.NET - Great Maps for Windows Forms & PresentationRawrIonics Isapi Rewrite FilterN2 CMSpatterns & practices – Enterprise LibraryBlogEngine.NETGameSetFarseer Physics EngineMirror Testing System

    Read the article

  • Apache2 Segfault - need help interpreting this coredump (suspect cause is memcache / php session related)

    - by WayneDV
    Three Apache2 web servers running a PHP 5.2.3 web site. We're using Memcache to cache rendered pages and also as the storage engine for PHP sessions. At peak traffic times we're getting Apache segmentation faults on all three web servers, and all HTTPD child processes segfault. My gut tells me that the increased Memcache traffic is stopping PHP sessions from being created or cleaned up, and thus the processes die. Can someone confirm that from the following backtrace?
    #0 _zend_mm_free_int (heap=0x7fb67a075820, p=0x7fb67a011538) at /usr/src/debug/php-5.3.3/Zend/zend_alloc.c:2018
    #1 0x00007fb665d02e82 in mmc_buffer_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:50
    #2 mmc_request_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:169
    #3 0x00007fb665d031ea in mmc_pool_free (pool=0x7fb67a00e458) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:917
    #4 0x00007fb665d0a2f1 in ps_close_memcache (mod_data=0x7fb66d625440) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_session.c:185
    #5 0x00007fb66d1b0935 in php_session_save_current_state () at /usr/src/debug/php-5.3.3/ext/session/session.c:625
    #6 php_session_flush () at /usr/src/debug/php-5.3.3/ext/session/session.c:1517
    #7 0x00007fb66d1b0c1b in zm_deactivate_session (type=<value optimized out>, module_number=<value optimized out>) at /usr/src/debug/php-5.3.3/ext/session/session.c:2171
    #8 0x00007fb66d2a719c in module_registry_cleanup (module=<value optimized out>) at /usr/src/debug/php-5.3.3/Zend/zend_API.c:2150
    #9 0x00007fb66d2b1994 in zend_hash_reverse_apply (ht=0x7fb66d629d60, apply_func=0x7fb66d2a7180 <module_registry_cleanup>) at /usr/src/debug/php-5.3.3/Zend/zend_hash.c:755
    #10 0x00007fb66d2a5c0d in zend_deactivate_modules () at /usr/src/debug/php-5.3.3/Zend/zend.c:866
    #11 0x00007fb66d2541b5 in php_request_shutdown (dummy=<value optimized out>) at /usr/src/debug/php-5.3.3/main/main.c:1607
    #12 0x00007fb66d32e037 in php_apache_request_dtor (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:509
    #13 php_handler (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:681
    #14 0x00007fb6784166f0 in ap_run_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:158
    #15 0x00007fb678419f58 in ap_invoke_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:372
    #16 0x00007fb6784254f0 in ap_process_request (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/modules/http/http_request.c:282
    #17 0x00007fb678422418 in ap_process_http_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/modules/http/http_core.c:190
    #18 0x00007fb67841e1b8 in ap_run_process_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/server/connection.c:43
    #19 0x00007fb678429f4b in child_main (child_num_arg=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:662
    #20 0x00007fb67842a21a in make_child (s=0x7fb679cd7860, slot=153) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:758
    #21 0x00007fb67842aea4 in perform_idle_server_maintenance (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:893
    #22 ap_mpm_run (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:1097
    #23 0x00007fb678402890 in main
(argc=1, argv=0x7fff6fecacb8) at /usr/src/debug/httpd-2.2.15/server/main.c:740 PHP.INI Follows: [PHP] engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED display_errors = Off display_startup_errors = Off log_errors = Off log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = Off variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off file_uploads = On upload_max_filesize = 2M allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 sendmail_path = /usr/sbin/sendmail -t -i mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [MySQL] mysql.allow_persistent = On mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_links = -1 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.save_path = "/var/lib/php/session" session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 1 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.entropy_file = session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] 
soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 /etc/php.d/memcached.ini : session.save_path="tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
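    Reading the posted trace, frames #1–#4 sit inside php-pecl-memcache 3.0.4's session save handler (ps_close_memcache tearing down its connection pool) and are reached from php_session_flush during request shutdown, so the crash happens when the session is written back and closed, not when it is created. The posted php.ini also still has session.save_handler = files and session.auto_start = 1, with only session.save_path overridden in /etc/php.d/memcached.ini. Below is a minimal, hedged sketch of how the session settings could be consolidated; the values are illustrative assumptions rather than a confirmed fix, and it is also worth checking whether a php-pecl-memcache build newer than 3.0.4 changes the pool-free path seen in the trace.

    ; /etc/php.d/memcached.ini -- illustrative sketch; values are assumptions, not a confirmed fix
    ; extension=memcache.so              ; loaded here or in another .ini
    session.save_handler = memcache      ; confirm the handler is not still left at "files"
    session.save_path    = "tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
    ; session.auto_start = 1 in the posted php.ini opens a session (and a memcache
    ; round trip) on every request; starting sessions only where needed reduces peak pressure.
    session.auto_start = 0

    Whether that removes the segfault depends on what is corrupting the heap; if the crash can be reproduced with a single request, running one child under a debugger (for example httpd -X) would narrow it down further.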

    Read the article

  • 500 Internal Server Error with PHP application

    - by James
    I have written a PHP application using Windows and XAMPP. I've been trying to run it on Ubuntu 10.10 with Lighttpd 1.4.26. Parts of the application work fine, but whenever I try to log in, I get a 500 - Internal Server Error page. The only thing that shows up in /var/log/lighttpd/error.log is 2011-02-25 13:43:13: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 1169 socket: unix:/tmp/php.socket-0 2011-02-25 13:43:13: (mod_fastcgi.c.3367) response not received, request sent: 1596 on socket: unix:/tmp/php.socket-0 for /~denton/customer-facing-portal/index.php?, closing connection If I had any output whatsoever from PHP, this would be a lot easier to debug. Any ideas on how to get some? Here is my /etc/lighttpd/lighttpd.conf file: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load server.modules = ( "mod_alias", "mod_compress", # "mod_rewrite", # "mod_redirect", # "mod_usertrack", # "mod_expire", # "mod_flv_streaming", # "mod_evasive", "mod_setenv" ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to upload files to, purged daily. server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## Use ipv6 only if available. 
(disabled for while, check #560837) #include_shell "/usr/share/lighttpd/use-ipv6.pl" ## bind to port (default: 80) # server.port = 81 ## bind to localhost only (default: all interfaces) ## server.bind = "localhost" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts server.pid-file = "/var/run/lighttpd.pid" ## ## Format: <errorfile-prefix><status>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/var/www/" ## virtual directory listings dir-listing.encoding = "utf-8" server.dir-listing = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't change) server.username = "www-data" ## change gid to <gid> (default: don't change) server.groupname = "www-data" #### compress module compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ("text/plain", "text/html", "application/x-javascript", "text/css") #### url handling modules (rewrite, redirect, access) # url.rewrite = ( "^/$" => "/server-status" ) # url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### expire module # expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### external configuration files ## mimetype mapping include_shell "/usr/share/lighttpd/create-mime.assign.pl" ## load enabled configuration files, ## read /etc/lighttpd/conf-available/README first include_shell "/usr/share/lighttpd/include-conf-enabled.pl" ## Set environment variables setenv.add-environment = ( "DB_URL__DEMO" => "192.168.1.231", "DB_NAME_DEMO" => "demo", "DB_USER_DEMO" => "user", "DB_PASS_DEMO" => "password", "DB_AGENCY_DEMO" => "demo" ) Here is my /etc/php5/cgi/php.ini file (sans 1641 lines of comments): [PHP] register_long_arrays = Off short_open_tag = Off engine = On short_open_tag = Off asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On html_errors = On variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=1 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = "America/Chicago" [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On 
odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] Update: here is /etc/lighttpd/conf-enabled/15-fastcgi-php.conf As far as I know, it's just the default config file the Ubuntu package installed. ## FastCGI programs have the same functionality as CGI programs, ## but are considerably faster through lower interpreter startup ## time and socketed communication ## ## Documentation: /usr/share/doc/lighttpd-doc/fastcgi.txt.gz ## http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi ## Start an FastCGI server for php (needs the php5-cgi package) fastcgi.server += ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "idle-timeout" => 20, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) )
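    The lighttpd error log only says that the FastCGI child died before returning a response, so the underlying PHP error is being lost. The posted php.ini already has log_errors = On, display_errors = On and display_startup_errors = On, but error_log is left unset, which means messages go to the child's stderr and may never reach a file you can read. A hedged sketch of the kind of addition that usually surfaces the error follows; the log path is an assumption, not part of the original setup.

    ; /etc/php5/cgi/php.ini -- illustrative sketch; the log path is an assumption
    log_errors = On
    error_log  = /var/log/php/php_errors.log   ; file must exist and be writable by www-data
    ; while debugging only: raise the reporting level as well
    error_reporting = E_ALL

    Restart lighttpd so the spawned FastCGI children pick up the change. If nothing appears in that log even then, the child is probably crashing rather than raising a PHP error; running the login page's script through php-cgi from a shell as the www-data user, with the same DB_* environment variables exported, tends to reproduce the failure outside lighttpd. Since the application was developed on Windows with XAMPP, missing extensions and case-sensitive file or include paths are common first suspects on Ubuntu.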

    Read the article
