Search Results

Search found 15172 results on 607 pages for 'array intersect'.


  • Please help! Every Post link links to the most recent post Wordpress

    - by kwek-kwek
    I got the site up on time, with one blog post up. Later I added another one and tested it. Big problem! Any link that used to take you to the old post (ie: side-bar "Recent Posts" links) now takes you to the newest one. I tested it by adding a third post, and got the same result. This is a custom wordpress theme and I have a, page.php <?php get_header(); ?> <?php if (have_posts()) : while (have_posts()) : the_post(); ?> <div id="BodyWrap"> <!--MAIN CONT--> <div id="mainCont"> <?php get_sidebar(); ?> <?php if (is_page(array('home'))) { ;?> <div id="rotateBanner"> <div id="slide-holder"> <div id="slide-runner"> <img id="slide-img-1" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial2.jpg" class="slide" alt="" /> <img id="slide-img-5" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial5.jpg" class="slide" alt="" /> <img id="slide-img-2" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial1.jpg" class="slide" alt="" /> <img id="slide-img-6" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial6.jpg" class="slide" alt="" /> <img id="slide-img-3" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial3.jpg" class="slide" alt="" /> <img id="slide-img-7" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial7.jpg" class="slide" alt="" /> <img id="slide-img-4" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial4.jpg" class="slide" alt="" /> <img id="slide-img-8" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial8.jpg" class="slide" alt="" /> <div id="slide-controls"> <p id="slide-client" class="text" style="display:none;"><span></span></p> <p id="slide-desc" class="text" style="display:none;"></p> <p id="slide-nav" style="display:none;"></p> </div> </div> <script type="text/javascript"> if(!window.slider) var slider={};slider.data=[{"id":"slide-img-1","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-5","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-2","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-6","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-3","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-7","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-4","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-8","client":"nature beauty","desc":"nature beauty photography"}]; </script> </div> </div> <?php } ?> <?php if (is_page(array('accueil'))) { ;?> <div id="rotateBanner"> <div id="slide-holder"> <div id="slide-runner"> <img id="slide-img-1" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial1-fr.jpg" class="slide" alt="" /> <img id="slide-img-5" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial5-fr.jpg" class="slide" alt="" /> <img id="slide-img-2" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial2-fr.jpg" class="slide" alt="" /> <img id="slide-img-6" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial6-fr.jpg" class="slide" alt="" /> <img id="slide-img-3" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial3-fr.jpg" class="slide" alt="" /> <img id="slide-img-7" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial7-fr.jpg" class="slide" alt="" /> <img id="slide-img-4" src="<?php bloginfo('template_url'); ?>/images/banner/testimonial4-fr.jpg" class="slide" alt="" /> <img id="slide-img-8" src="<?php 
bloginfo('template_url'); ?>/images/banner/testimonial8-fr.jpg" class="slide" alt="" /> <div id="slide-controls"> <p id="slide-client" class="text" style="display:none;"><span></span></p> <p id="slide-desc" class="text" style="display:none;"></p> <p id="slide-nav" style="display:none;"></p> </div> </div> <script type="text/javascript"> if(!window.slider) var slider={};slider.data=[{"id":"slide-img-1","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-5","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-2","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-6","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-3","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-7","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-4","client":"nature beauty","desc":"nature beauty photography"},{"id":"slide-img-8","client":"nature beauty","desc":"nature beauty photography"}]; </script> </div> </div> <?php } ?> <?php if (is_page(array('contact-us'))) { ;?> <div id="rotateBanner"> <?php custom_field_image() ?> </div> <?php } ?> <div id="mainCopy"> <div id="content"> <h2> <?php if (is_page('home','accueil')) : ?> <?php else : ?> <?php single_post_title(); ?> <?php endif; ?></h2> <?php the_content('<p class="serif">Read the rest of this page &raquo;</p>'); ?> <?php wp_link_pages(array('before' => '<p><strong>Pages:</strong> ', 'after' => '</p>', 'next_or_number' => 'number')); ?> </div> </div> <?php if (is_page(array('home','accueil'))) { ;?> <div id="rightCol2"> <div id="Fworks"> <h2>Featured work</h2> <li><img src="<?php bloginfo('template_url'); ?>/images/portage-thumb.jpg" width="234" height="92" border="0" alt="" /></li> <li><a href="<?php bloginfo('url'); ?>our-work/foundation-on-antivirals"><img src="<?php bloginfo('template_url'); ?>/images/fav-thumb.jpg" width="234" height="92" border="0" alt="" /></a></li> <li><img src="<?php bloginfo('template_url'); ?>/images/danslejardin-thumb.jpg" width="234" height="92" border="0" alt="" /></li> </div> <div id="NewEvents"> <?php if ( (strtolower(ICL_LANGUAGE_CODE) == 'en') ) {echo("<h2>News &amp; Events</h2");} ?> <?php if ( (strtolower(ICL_LANGUAGE_CODE) == 'fr')) echo("<h2>Nouvelles</h2") ?> <div id="NewsListings"> <ul> <?php //dbem_get_events_list("limit=5&scope=al&order=DESC"); ?> <?php include('events.php');?> </ul> </div> </div> </div> <?php } ?> </div> </div> <?php endwhile; endif; ?> <?php get_footer(); ?> single.php <?php /** * @package WordPress * @subpackage Default_Theme */ get_header(); ?> <div id="BodyWrap"> <!--MAIN CONT--> <div id="mainCont"> <?php get_sidebar(); ?> <?php if (is_page(array('home','contact-us'))) { ;?> <div id="rotateBanner"> <?php custom_field_image() ?> </div> <?php } ?> <div id="mainCopy"> <div id="content" class="widecolumn" role="main"> <?php if (have_posts()) : while (have_posts()) : the_post(); ?> <!-- <div class="navigation"> <div class="alignleft"><?php previous_post_link('&laquo; %link') ?></div> <div class="alignright"><?php next_post_link('%link &raquo;') ?></div> </div> <br class="clr" />--> <div <?php post_class() ?> id="post-<?php the_ID(); ?>"> <h2><?php the_title(); ?></h2> <div class="entry"> <?php the_content('<p class="serif">Read the rest of this entry &raquo;</p>'); ?> <?php wp_link_pages(array('before' => '<p><strong>Pages:</strong> ', 'after' => '</p>', 'next_or_number' => 'number')); ?> <?php the_tags( '<p>Tags: ', ', ', '</p>'); ?> <!--<p 
class="postmetadata alt"> <small> This entry was posted <?php /* This is commented, because it requires a little adjusting sometimes. You'll need to download this plugin, and follow the instructions: http://binarybonsai.com/wordpress/time-since/ */ /* $entry_datetime = abs(strtotime($post->post_date) - (60*120)); echo time_since($entry_datetime); echo ' ago'; */ ?> on <?php the_time('l, F jS, Y') ?> at <?php the_time() ?> and is filed under <?php the_category(', ') ?>. You can follow any responses to this entry through the <?php post_comments_feed_link('RSS 2.0'); ?> feed. <?php if ( comments_open() && pings_open() ) { // Both Comments and Pings are open ?> You can <a href="#respond">leave a response</a>, or <a href="<?php trackback_url(); ?>" rel="trackback">trackback</a> from your own site. <?php } elseif ( !comments_open() && pings_open() ) { // Only Pings are Open ?> Responses are currently closed, but you can <a href="<?php trackback_url(); ?> " rel="trackback">trackback</a> from your own site. <?php } elseif ( comments_open() && !pings_open() ) { // Comments are open, Pings are not ?> You can skip to the end and leave a response. Pinging is currently not allowed. <?php } elseif ( !comments_open() && !pings_open() ) { // Neither Comments, nor Pings are open ?> Both comments and pings are currently closed. <?php } edit_post_link('Edit this entry','','.'); ?> </small> </p>--> </div> </div> <?php comments_template(); ?> <?php endwhile; else: ?> <p>Sorry, no posts matched your criteria.</p> <?php endif; ?> </div> </div> </div> </div> <?php get_footer(); ?> index.php <?php get_header(); ?> <!--MAIN WRAP--> <div id="BodyWrap"> <!--MAIN CONT--> <div id="mainCont"> <?php get_sidebar(); ?> <div id="mainCopy"> <div id="content"> <?php if ( have_posts() ) : while ( have_posts() ) : the_post(); ?> <div id="BGHeadTitle"><h2><a href="<?php the_permalink() ?>"><?php the_title(); ?></a></h2></div> <?php the_content(); ?> <p><?php the_time('F j, Y'); ?> at <?php the_time('g:i a'); ?> | <?php the_category(', '); ?> | <?php comments_number('No comment', '1 comment', '% comments'); ?></p> <?php comments_template(); // Get wp-comments.php template ?> <?php endwhile; else: ?> <h2>Woops...</h2> <p>Sorry, no posts we're found.</p> <?php endif; ?> <p align="center"><?php posts_nav_link(); ?></p> </div> </div> </div> </div> <?php get_footer(); ?> my recent post code : <ul> <?php query_posts('cat=3,4,5&posts_per_page=5&order=ASC&orderby=date'); if ( have_posts() ) : while ( have_posts() ) : the_post()?> <li> <span class="date"><?php the_time('M j') ?></span> <a href="<?php the_permalink(); ?>" title="<?php the_title(); ?>"><?php the_title(); ?></a> </li> <?php endwhile; ?> <?php rewind_posts(); ?> </ul> I am really stuck the site went live and when I was working on the testserver I only noticed it.view the site here »

    Read the article

  • How to debug memcached "SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY" errors?

    - by Jevgenij Evll
    I have a two server memcached setup. When memcached write fails, I receive an email notification. About once per day "SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY" error comes and I have no idea how to find the reason. I am using PHP Memcached client. I am not using too long keys. I tried adding -v flag, but it does not help, the log remains empty. If I include output of getStats to the error notification, I receive the following info: Array ( [192.168.0.3:11211] => Array ( [pid] => 28167 [uptime] => 3671962 [threads] => 4 [time] => 1358714713 [pointer_size] => 64 [rusage_user_seconds] => 24516 [rusage_user_microseconds] => 130981 [rusage_system_seconds] => 86246 [rusage_system_microseconds] => 675512 [curr_items] => 1616352 [total_items] => 118339822 [limit_maxbytes] => 2684354560 [curr_connections] => 8 [total_connections] => 78108681 [connection_structures] => 356 [bytes] => 981522779 [cmd_get] => 1561752945 [cmd_set] => 158718324 [get_hits] => 1383072575 [get_misses] => 178680370 [evictions] => 0 [bytes_read] => 138113231690 [bytes_written] => 1091741700765 [version] => 1.4.15 ) [192.168.0.4:11211] => Array ( [pid] => -1 [uptime] => 0 [threads] => 0 [time] => 0 [pointer_size] => 0 [rusage_user_seconds] => 0 [rusage_user_microseconds] => 0 [rusage_system_seconds] => 0 [rusage_system_microseconds] => 0 [curr_items] => 0 [total_items] => 0 [limit_maxbytes] => 0 [curr_connections] => 0 [total_connections] => 0 [connection_structures] => 0 [bytes] => 0 [cmd_get] => 0 [cmd_set] => 0 [get_hits] => 0 [get_misses] => 0 [evictions] => 0 [bytes_read] => 0 [bytes_written] => 0 [version] => ) )
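
    Not a diagnosis, but one way to get more detail out of the client side while debugging: the PHP Memcached extension exposes failure-handling options and a per-operation result code, so each failed write can be logged with the exact reason before (or when) a server gets marked dead. A rough sketch follows; the option values are placeholder assumptions, not recommendations.

        <?php
        $mc = new Memcached();
        $mc->setOption(Memcached::OPT_CONNECT_TIMEOUT, 100);    // ms allowed for a connect attempt
        $mc->setOption(Memcached::OPT_RETRY_TIMEOUT, 30);       // seconds before a dead server is retried
        $mc->setOption(Memcached::OPT_SERVER_FAILURE_LIMIT, 5); // failures tolerated before ejection
        $mc->addServer('192.168.0.3', 11211);
        $mc->addServer('192.168.0.4', 11211);

        $key   = 'example-key';
        $value = 'example-value';
        if ($mc->set($key, $value, 3600) === false) {
            // getResultCode()/getResultMessage() report why the write failed
            // (timeout, connection failure, server temporarily disabled, ...).
            error_log(sprintf('memcached set of %s failed: [%d] %s',
                $key, $mc->getResultCode(), $mc->getResultMessage()));
        }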

    Read the article

  • jQuery Templates and Data Linking (and Microsoft contributing to jQuery)

    - by ScottGu
    The jQuery library has a passionate community of developers, and it is now the most widely used JavaScript library on the web today. Two years ago I announced that Microsoft would begin offering product support for jQuery, and that we’d be including it in new versions of Visual Studio going forward. By default, when you create new ASP.NET Web Forms and ASP.NET MVC projects with VS 2010 you’ll find jQuery automatically added to your project. A few weeks ago during my second keynote at the MIX 2010 conference I announced that Microsoft would also begin contributing to the jQuery project.  During the talk, John Resig -- the creator of the jQuery library and leader of the jQuery developer team – talked a little about our participation and discussed an early prototype of a new client templating API for jQuery. In this blog post, I’m going to talk a little about how my team is starting to contribute to the jQuery project, and discuss some of the specific features that we are working on such as client-side templating and data linking (data-binding). Contributing to jQuery jQuery has a fantastic developer community, and a very open way to propose suggestions and make contributions.  Microsoft is following the same process to contribute to jQuery as any other member of the community. As an example, when working with the jQuery community to improve support for templating to jQuery my team followed the following steps: We created a proposal for templating and posted the proposal to the jQuery developer forum (http://forum.jquery.com/topic/jquery-templates-proposal and http://forum.jquery.com/topic/templating-syntax ). After receiving feedback on the forums, the jQuery team created a prototype for templating and posted the prototype at the Github code repository (http://github.com/jquery/jquery-tmpl ). We iterated on the prototype, creating a new fork on Github of the templating prototype, to suggest design improvements. Several other members of the community also provided design feedback by forking the templating code. There has been an amazing amount of participation by the jQuery community in response to the original templating proposal (over 100 posts in the jQuery forum), and the design of the templating proposal has evolved significantly based on community feedback. The jQuery team is the ultimate determiner on what happens with the templating proposal – they might include it in jQuery core, or make it an official plugin, or reject it entirely.  My team is excited to be able to participate in the open source process, and make suggestions and contributions the same way as any other member of the community. jQuery Template Support Client-side templates enable jQuery developers to easily generate and render HTML UI on the client.  Templates support a simple syntax that enables either developers or designers to declaratively specify the HTML they want to generate.  Developers can then programmatically invoke the templates on the client, and pass JavaScript objects to them to make the content rendered completely data driven.  These JavaScript objects can optionally be based on data retrieved from a server. Because the jQuery templating proposal is still evolving in response to community feedback, the final version might look very different than the version below. 
This blog post gives you a sense of how you can try out and use templating as it exists today (you can download the prototype by the jQuery core team at http://github.com/jquery/jquery-tmpl or the latest submission from my team at http://github.com/nje/jquery-tmpl).  jQuery Client Templates You create client-side jQuery templates by embedding content within a <script type="text/html"> tag.  For example, the HTML below contains a <div> template container, as well as a client-side jQuery “contactTemplate” template (within the <script type="text/html"> element) that can be used to dynamically display a list of contacts: The {{= name }} and {{= phone }} expressions are used within the contact template above to display the names and phone numbers of “contact” objects passed to the template. We can use the template to display either an array of JavaScript objects or a single object. The JavaScript code below demonstrates how you can render a JavaScript array of “contact” object using the above template. The render() method renders the data into a string and appends the string to the “contactContainer” DIV element: When the page is loaded, the list of contacts is rendered by the template.  All of this template rendering is happening on the client-side within the browser:   Templating Commands and Conditional Display Logic The current templating proposal supports a small set of template commands - including if, else, and each statements. The number of template commands was deliberately kept small to encourage people to place more complicated logic outside of their templates. Even this small set of template commands is very useful though. Imagine, for example, that each contact can have zero or more phone numbers. The contacts could be represented by the JavaScript array below: The template below demonstrates how you can use the if and each template commands to conditionally display and loop the phone numbers for each contact: If a contact has one or more phone numbers then each of the phone numbers is displayed by iterating through the phone numbers with the each template command: The jQuery team designed the template commands so that they are extensible. If you have a need for a new template command then you can easily add new template commands to the default set of commands. Support for Client Data-Linking The ASP.NET team recently submitted another proposal and prototype to the jQuery forums (http://forum.jquery.com/topic/proposal-for-adding-data-linking-to-jquery). This proposal describes a new feature named data linking. Data Linking enables you to link a property of one object to a property of another object - so that when one property changes the other property changes.  Data linking enables you to easily keep your UI and data objects synchronized within a page. If you are familiar with the concept of data-binding then you will be familiar with data linking (in the proposal, we call the feature data linking because jQuery already includes a bind() method that has nothing to do with data-binding). Imagine, for example, that you have a page with the following HTML <input> elements: The following JavaScript code links the two INPUT elements above to the properties of a JavaScript “contact” object that has a “name” and “phone” property: When you execute this code, the value of the first INPUT element (#name) is set to the value of the contact name property, and the value of the second INPUT element (#phone) is set to the value of the contact phone property. 
The properties of the contact object and the properties of the INPUT elements are also linked – so that changes to one are also reflected in the other. Because the contact object is linked to the INPUT element, when you request the page, the values of the contact properties are displayed: More interesting, the values of the linked INPUT elements will change automatically whenever you update the properties of the contact object they are linked to. For example, we could programmatically modify the properties of the “contact” object using the jQuery attr() method like below: Because our two INPUT elements are linked to the “contact” object, the INPUT element values will be updated automatically (without us having to write any code to modify the UI elements): Note that we updated the contact object above using the jQuery attr() method. In order for data linking to work, you must use jQuery methods to modify the property values. Two Way Linking The linkBoth() method enables two-way data linking. The contact object and INPUT elements are linked in both directions. When you modify the value of the INPUT element, the contact object is also updated automatically. For example, the following code adds a client-side JavaScript click handler to an HTML button element. When you click the button, the property values of the contact object are displayed using an alert() dialog: The following demonstrates what happens when you change the value of the Name INPUT element and click the Save button. Notice that the name property of the “contact” object that the INPUT element was linked to was updated automatically: The above example is obviously trivially simple.  Instead of displaying the new values of the contact object with a JavaScript alert, you can imagine instead calling a web-service to save the object to a database. The benefit of data linking is that it enables you to focus on your data and frees you from the mechanics of keeping your UI and data in sync. Converters The current data linking proposal also supports a feature called converters. A converter enables you to easily convert the value of a property during data linking. For example, imagine that you want to represent phone numbers in a standard way with the “contact” object phone property. In particular, you don’t want to include special characters such as ()- in the phone number - instead you only want digits and nothing else. In that case, you can wire-up a converter to convert the value of an INPUT element into this format using the code below: Notice above how a converter function is being passed to the linkFrom() method used to link the phone property of the “contact” object with the value of the phone INPUT element. This convertor function strips any non-numeric characters from the INPUT element before updating the phone property.  Now, if you enter the phone number (206) 555-9999 into the phone input field then the value 2065559999 is assigned to the phone property of the contact object: You can also use a converter in the opposite direction also. For example, you can apply a standard phone format string when displaying a phone number from a phone property. Combining Templating and Data Linking Our goal in submitting these two proposals for templating and data linking is to make it easier to work with data when building websites and applications with jQuery. Templating makes it easier to display a list of database records retrieved from a database through an Ajax call. 
Data linking makes it easier to keep the data and user interface in sync for update scenarios. Currently, we are working on an extension of the data linking proposal to support declarative data linking. We want to make it easy to take advantage of data linking when using a template to display data. For example, imagine that you are using the following template to display an array of product objects: Notice the {{link name}} and {{link price}} expressions. These expressions enable declarative data linking between the SPAN elements and properties of the product objects. The current jQuery templating prototype supports extending its syntax with custom template commands. In this case, we are extending the default templating syntax with a custom template command named “link”. The benefit of using data linking with the above template is that the SPAN elements will be automatically updated whenever the underlying “product” data is updated.  Declarative data linking also makes it easier to create edit and insert forms. For example, you could create a form for editing a product by using declarative data linking like this: Whenever you change the value of the INPUT elements in a template that uses declarative data linking, the underlying JavaScript data object is automatically updated. Instead of needing to write code to scrape the HTML form to get updated values, you can instead work with the underlying data directly – making your client-side code much cleaner and simpler. Downloading Working Code Examples of the Above Scenarios You can download this .zip file to get with working code examples of the above scenarios.  The .zip file includes 4 static HTML page: Listing1_Templating.htm – Illustrates basic templating. Listing2_TemplatingConditionals.htm – Illustrates templating with the use of the if and each template commands. Listing3_DataLinking.htm – Illustrates data linking. Listing4_Converters.htm – Illustrates using a converter with data linking. You can un-zip the file to the file-system and then run each page to see the concepts in action. Summary We are excited to be able to begin participating within the open-source jQuery project.  We’ve received lots of encouraging feedback in response to our first two proposals, and we will continue to actively contribute going forward.  These features will hopefully make it easier for all developers (including ASP.NET developers) to build great Ajax applications. Hope this helps, Scott P.S. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Read the article

  • Z axis in isometric tilemap

    - by gyhgowvi
    I'm experimenting with isometric tile maps in JavaScript and HTML5 Canvas. I'm storing the tile map data in a JavaScript 2D array: // 0 - Grass // 1 - Dirt // ... var mapData = [ [0, 0, 0, 0, 0], [0, 0, 1, 0, 0, ... ] and drawing it with: for(var i = 0; i < mapData.length; i++) { for(var j = 0; j < mapData[i].length; j++) { var p = iso2screen(i, j, 0); // X, Y, Z context.drawImage(tileArray[mapData[i][j]], p.x, p.y); } } The problem is that this call fixes every tile's Z axis at zero: var p = iso2screen(i, j, 0); Does anyone have an idea how to do something like setting the Z axis of mapData[0][0] to 3 and of mapData[5][5] to 5? One idea I have is to write a function for grass, dirt and so on, store those functions in the 2D array, draw from that, and later call mapData[0][0].setZ(3); But is it really a good idea to write a function for every tile?

    Read the article

  • Install MegaCli to Monitor Perc 5/i in Nexentastor 3

    - by Peter Valadez
    I have a Dell 2950 with a Perc 5/i Raid controller that we've already installed Nexentastor 3 Community Edition on. We setup a raid-10 array that and put a ZFS pool on top of the hardware. As I understand, in this configuration ZFS/Nexentastor will not be able to tell when a disk fails in the array. Obviously, this is not optimal. Since the Dell Perc 5/i controller is a rebranded LSI controller, you should be able to use the MegaCli utility to manage the array and monitor its condition. I had seen in a separate forum that the Perc 5/i is very similar to the LSI MegaRAID 8480E, so I tried installing the MegaCli utility at the link below. However, I have not been able to successfully install the utility. http://www.lsi.com/support/products/Pages/MegaRAIDSAS8480E.aspx Here is what happened when I tried to install MegaCli: root@Nexenta2:/files# pkgadd -d MegaCli.pkg Warning: unable to relocate '$BASEDIR' mv: cannot move `solmegacli-8.02.16/' to a subdirectory of itself, `solmegacli-8.02.16//var/lib/dpkg/alien/solmegacli/reloc/solmegacli-8.02.16' mv: cannot move `solmegacli-8.02.16/' to a subdirectory of itself, `solmegacli-8.02.16//opt/solmegacli-8.02.16' 822-date: warning: This program is deprecated. Please use 'date -R' instead. 822-date: warning: This program is deprecated. Please use 'date -R' instead. solmegacli_8.02.16-1_all.deb generated (Reading database ... 41397 files and directories currently installed.) Preparing to replace solmegacli 8.02.16-1 (using solmegacli_8.02.16-1_all.deb) ... Unpacking replacement solmegacli ... Setting up solmegacli (8.02.16-1) ... In /var/logs/dpkg.log: 2012-03-23 20:40:19 status unpacked solmegacli 8.02.16-1 2012-03-23 20:40:19 configure solmegacli 8.02.16-1 8.02.16-1 2012-03-23 20:40:19 status unpacked solmegacli 8.02.16-1 2012-03-23 20:40:19 status half-configured solmegacli 8.02.16-1 2012-03-23 20:40:19 status installed solmegacli 8.02.16-1 So... I've got three questions: Is it possible to install and use MegaCli in Nexentastor 3? If so, how can I install MegaCli on Nexentastor 3? Suggestions welcome!!! If not, is there a better way to monitor the condition of the Perc 5/i hardware raid? Our 2950 does have a DRAC card, so can I use that to monitor the raid condition?

    Read the article

  • Tile Engine - Procedural generation, Data structures, Rendering methods - A lot of effort question!

    - by Trixmix
    Isometric tile and GameObject rendering. To get the look I'm after, I need to take into account which tiles are drawn first and which last. What I used is a TileRenderQueue object: you give it a tile list and it returns a queue of which ones to draw, ordered by their Z coordinate, so that a tile with a higher Z is drawn later. As mentioned above, instead of storing the location data in each tile instance, I want the index in the array to be the location, and then perhaps draw the tiles straight from the array rather than spending a long time looping over them and sorting them by Z. This is the hardest part for me: finding a simple solution to the which-one-to-draw-when problem. There is also the fact that a GameObject with a larger X needs to be drawn over the rest of the tiles, and so on. Here is an example: All the parts work together to create an efficient engine, so it's important to me that you answer all of them. I hope you'll work on the answers as hard as I worked on this question! If anything is unclear, tell me in the comments. Thanks for the help!

    Read the article

  • How do I align my partition table properly?

    - by Jorge Castro
    I am in the process of building my first RAID5 array. I've used mdadm to create the following set up: root@bondigas:~# mdadm --detail /dev/md1 /dev/md1: Version : 00.90 Creation Time : Wed Oct 20 20:00:41 2010 Raid Level : raid5 Array Size : 5860543488 (5589.05 GiB 6001.20 GB) Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 20 20:13:48 2010 State : clean, degraded, recovering Active Devices : 3 Working Devices : 4 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 64K Rebuild Status : 1% complete UUID : f6dc829e:aa29b476:edd1ef19:85032322 (local to host bondigas) Events : 0.12 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 4 8 64 3 spare rebuilding /dev/sde While that's going I decided to format the beast with the following command: root@bondigas:~# mkfs.ext4 /dev/md1p1 mke2fs 1.41.11 (14-Mar-2010) /dev/md1p1 alignment is offset by 63488 bytes. This may result in very poor performance, (re)-partitioning suggested. Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=16 blocks, Stripe width=48 blocks 97853440 inodes, 391394047 blocks 19569702 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 11945 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 Writing inode tables: ^C 27/11945 root@bondigas:~# ^C I am unsure what to do about "/dev/md1p1 alignment is offset by 63488 bytes." and how to properly partition the disks to match so I can format it properly.

    Read the article

  • PHP Error / Mk-livestatus in Nagvis

    - by tod
    I have Nagios and Nagvis installed via Debian packages, but when I run Nagvis and try to get into the "General Configuration" menu I get this error Error: (0) Array to string conversion (/usr/share/nagvis/share/server/core/classes/WuiViewEditMainCfg.php:126) #0 /usr/share/nagvis/share/server/core/classes/WuiViewEditMainCfg.php(126): nagvisExceptionErrorHandler(8, 'Array to string...', '/usr/share/nagv...', 126, Array) #1 /usr/share/nagvis/share/server/core/classes/WuiViewEditMainCfg.php(44): WuiViewEditMainCfg->getFields() #2 /usr/share/nagvis/share/server/core/classes/CoreModMainCfg.php(56): WuiViewEditMainCfg->parse() #3 /usr/share/nagvis/share/server/core/functions/index.php(120): CoreModMainCfg->handleAction() #4 /usr/share/nagvis/share/server/core/ajax_handler.php(63): require('/usr/share/nagv...') #5 {main} I'm also having an issue with backends in Nagvis. check-mk-livestatus is installed, but I get this error when hovering over items: Problem (backend: live_1): Unable to connect to the /var/lib/nagios3/rw/live in backend live_1: Connection refused Or when trying to add things: Unable to fetch data from backend - falling back to input field. /var/lib/nagios3/rw/ exists, but there is no "live" file. I'm really not sure what is going on, especially since these were all Debian packages... Here is the most relevant part of the nagvis.ini.php: ; ---------------------------- ; Backend definitions ; ---------------------------- ; Example definition of a livestatus backend. ; In this case the backend_id is live_1 ; The path /usr/local/nagios/var/rw has to exist [backend_live_1] backendtype="mklivestatus" ; The status host can be used to prevent annoying timeouts when a backend is not ; reachable. This is only useful in multi backend setups. ; ; It works as follows: The assumption is that there is a "local" backend which ; monitors the host of the "remote" backend. When the remote backend host is ; reported as UP the backend is queried as normal. ; When the remote backend host is reported as "DOWN" or "UNREACHABLE" NagVis won't ; try to connect to the backend anymore until the backend host gets available again. ; ; The statushost needs to be given in the following format: ; "<backend_id>:<hostname>" -> e.g. "live_2:nagios" ;statushost="" socket="unix:/var/lib/nagios3/rw/live" There is nothing relating to 'backends' or 'mklivestatus' in /var/log/nagios3/nagios.log Any help would be much appreciated

    Read the article

  • AIX: iscsi volumes disappear after reboot

    - by Dan
    We have an IBM P505 AIX box, with two internal disks and a defined iSCSI volume. The iSCSI volume is defined in it's own volume group, and is connected to an IBM iSCSI DS3300 disk array via the secondary onboard ethernet port (ie, we're not using a dedicated HBA, we're using the second onboard ethernet port for iSCSI exclusively.) When we reboot the AIX box, the iSCSI volume doesn't get mounted (which is fine; I've figured out that it fails to mount because AIX tries mounting it's volumes before starting the networking stack.) The problem is, after the server has booted it fails to redetect the iSCSI target as a physical disk. This means the volume group (iscsivg) can't go online. if I run cfgmgr -v to redetect the iscsi volume it successfully detects the iscsi target volume and creates a physical volume reference, but allocates it a different volume ID to what was defined before. eg - rootvg contains hdisk 0 and 1 iscsivg was originally defined with hdisk2 as the physical iSCSI volume. after reboot and running cfgmgr -v, AIX detects physical volumes hdisk0, hdisk11 and hdisk3. As there's no hdisk2, I can't varyon the iscsivg volume group. I can't seem any existing hdisk2 definition in the ODM. I can't easily add or change the definition of the physcial disk in the iscsivg volume group as it won't "varyon". Exporting the volume group deletes it completely, recreating the volume group by "importing" it from the reallocated disk makes it available again, but surely there's a better way? Can I force a specific hdisk drive designation for an iscsi target? How do you bring online iSCSI volumes after a reboot? I assume this "just works" with a dedicated HBA instead of a generic ethernet adapter? By the way, the iSCSI volume works fine once it's mounted; we only have problems getting it working - and only with AIX. The iSCSI array works fine with our Linux and Windows servers; ie the volumes get detected and remounted after boot time without any problems, using generic ethernet adapters. Here's some of the config from the AIX box: defined disks / devices: # lsdev hdisk0 Available 06-08-01-5,0 16 Bit LVD SCSI Disk Drive hdisk1 Available 06-08-01-8,0 16 Bit LVD SCSI Disk Drive hdisk3 Available Other iSCSI Disk Drive iscsi0 Available iSCSI Protocol Device scsi0 Available 06-08-00 PCI-X Dual Channel Ultra320 SCSI Adapter bus scsi1 Available 06-08-01 PCI-X Dual Channel Ultra320 SCSI Adapter bus ses0 Available 06-08-01-15,0 SCSI Enclosure Services Device sisscsia0 Available 06-08 PCI-X Dual Channel Ultra320 SCSI Adapter iscsi target definition in /etc/iscsi/targets: # IBM DS3300 disk array # port 1 on second controller 10.10.xx.xxx 3260 iqn.1992-01.com.lsi:1535.600a0b80005b0a7fxxxxxxxxxxxx physical volumes (after reimporting the volume group) # lspv hdisk0 0003b08a0d4936b6 rootvg active hdisk1 0003b08aaa5cb366 rootvg active hdisk3 0003b08a032d04bb iscsivg active

    Read the article

  • Linux boot on a raid1 software raid ?

    - by azera
    Hello I am trying to convert my single disk boot to a raid1 boot So far here is what i have: I sucessfully create the raid 1 as degraded with the new drive alone, I copied all the data on it I can mount that raid 1, see its files etc I already have a raid5 that is working on the same box (although not booting on it) I have installed grub on both drive When grub boot, it loads the kernel alright, but during the kernel boot it fails to load the "root block device" The kernel tells me : 1 - detected that root device is an md device 2 - determining root devices 3 - mounting root 4 - mounting /dev/md125 on /newroot failed: input/output error. Please enter another root device: ... At this point, if I enter /dev/sda3 (my "old" root device that isn't converted to raid yet) everything boots fine without the root. The /dev/md125 device is indeed created but it seems to be created after the error happens, as in it creates it after loading the device, when mdadm is loaded. Somehow it looks like it can't/doesn't load the raid array before it needs to mount it, and I don't know how I can solve that. My config files (taken from the system once it boots with sda3 as root device): $ cat /etc/mdadm.conf ARRAY /dev/md/md0-r5 metadata=0.90 UUID=1a118934:c831bdb3:64188b84:66721085 ARRAY /dev/md125 metadata=0.90 UUID=48ec4190:a80d4dde:64188b84:66721085 $ cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] [raid0] [raid10] md125 : active raid1 sdc3[1] 477853312 blocks [2/1] [_U] md127 : active raid5 sdd[0] sdf[3] sdb[2] sde[1] 4395415488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU] unused devices: <none> $ cat /boot/grub/menu.lst default 0 timeout 8 splashimage=(hd0,0)/boot/grub/splash.xpm.gz title Gentoo Linux 2.6.31-r10 root (hd0,0) #kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/ram0 real_root=/dev/sda3 kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/md125 md=125,/dev/sdc3,/dev/sda3 initrd /boot/initramfs-genkernel-x86_64-2.6.31-gentoo-r10 # blkid /dev/sda1: UUID="89fee223-b845-4e0a-8a0b-e6cf695d5bcf" TYPE="ext2" /dev/sda2: UUID="a72296a8-d7d4-447f-a34b-ee920fd1a767" TYPE="swap" /dev/sda3: UUID="97eb0a6a-c385-4a9d-bf74-c0bab1fa4dc1" TYPE="ext3" /dev/sdb: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sdc1: UUID="d36537fd-19a0-b8a3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sdd: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sde: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/md127: UUID="13a41589-4cf1-4c04-91ca-37484182c783" TYPE="ext4" /dev/sdf: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sdc2: UUID="a1916397-1b48-45d7-9f98-73aa521e882f" TYPE="swap" /dev/sdc3: UUID="48ec4190-a80d-4dde-6418-8b8466721085" TYPE="linux_raid_member" /dev/md125: UUID="c947ed64-1d4d-4d1d-b4d2-24669fff916e" SEC_TYPE="ext2" TYPE="ext3" # mdadm -E mdadm: No devices to examine # fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe975e9fc Device Boot Start End Blocks Id System /dev/sda1 1 5 40131 83 Linux /dev/sda2 6 1311 10490445 82 Linux swap / Solaris /dev/sda3 1312 60801 477853425 83 Linux Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe975e9fc Device Boot Start End Blocks Id System /dev/sdc1 1 5 40131 83 Linux /dev/sdc2 6 1311 10490445 82 
Linux swap / Solaris /dev/sdc3 1312 60801 477853425 83 Linux Disk /dev/md125: 489.3 GB, 489321791488 bytes 2 heads, 4 sectors/track, 119463328 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk identifier: 0x00000000 Disk /dev/md125 doesn't contain a valid partition table

    Read the article

  • Must partprobe before using drive?

    - by Jeff Welling
    This is a followup question to Cannot mount /dev/sdc1 on Debian 5.0, special device /dev/sdc1 doesn't exist Basically, I have 6 SATA hard drives in a machine and I'm trying to create a RAID6 array with them. When I try to run the mdadm command to create (with the verbose option) a raid array, I see messages like "mdadm: super1.x cannot open /dev/sdf1: No such device or address" which are resolved by doing partprobe /dev/sdf and then re-running the mdadm command. The problem is that I have to run partprobe after each reboot, and from experience I don't think this is normal behaviour -- on no other linux machine do I have to partprobe the device before I can use it. Something must be going wrong, but how do I troubleshoot this to find out what? Could this be caused by a hardware problem? Edit: Additional note - before I seemed to only have this problem with one drive, but now I'm having it with 3 drives.

    Read the article

  • Disable comments / Spam protection

    - by SamIAm
    My client's site is built in SilverStripe; it has a news page that allows people to leave comments. Unfortunately we've got loads of spam. I'm new to this: is there any way to disable the comment field by default, and how do I do it? Alternatively, is there an easy way to install spam protection? Thanks heaps. Sam Update - Because this is someone else's code, I just realised it already has some sort of spam protection, so we are now trying to disable comments instead. I have managed to make "no comments" the default by changing BlogEntry.php from static $defaults = array( "ProvideComments" => true, 'ShowInMenus' => false ); to static $defaults = array( "ProvideComments" => false, //changed 'ShowInMenus' => false ); Am I on the right track to disable comments by default? Also, how can I stop the news page from showing the "xxx comments" link? eg Test Posted by Admin on 21 June 2011 | 3 Comments Tags: P This is a test.... 3 comments | Read the full post Thanks. S:)
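
    One thing worth noting, hedged because it depends on the SilverStripe version in use: the $defaults array only applies when a new BlogEntry is created, so existing news posts keep whatever ProvideComments value they were saved with. A rough one-off sketch (SilverStripe 2.x ORM assumed) for switching comments off on the posts that already exist is below; hiding the "x comments" link itself is a template change in the theme rather than PHP, so it is not covered here.

        <?php
        // Run once, e.g. from a temporary controller action or a sake task.
        $entries = DataObject::get('BlogEntry');      // all existing blog entries
        if ($entries) {
            foreach ($entries as $entry) {
                $entry->ProvideComments = false;      // same flag the $defaults array sets
                $entry->write();
            }
        }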

    Read the article

  • amazon dynamoDB or MySQL for storing large arrays inside each row

    - by Logan Besecker
    I am trying to decide which database I should use for an application I'm making. I was leaning toward dynamoDB because of its scalability, but then I read in the documentation which said: there is a limit of 64 KB on the item size although it looks like MySQL has a similar restriction documented here This application will be storing a lot of data in two arrays, which could contain upwards of 10,000-100,000 strings in each. I estimate that these strings will each be somewhere around 20 characters long, so each element of the array will be around 40bytes and each array could be around 4MB. Given this predicament, what database on amazon AWS would you use; or how would you get around the limit of size per row? Thanks in advance, Logan Besecker
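
    One generic way around a per-item or per-row size cap, sketched in PHP purely to illustrate the idea rather than any DynamoDB or MySQL specifics: serialize and compress the array, split the blob into fixed-size chunks, and store each chunk under a derived key (mylist#0, mylist#1, ...), reassembling on read. The chunk size and key scheme below are assumptions.

        <?php
        function splitForStorage($baseKey, array $strings, $chunkBytes = 60000)
        {
            // Serialize and compress first; tens of thousands of short strings usually shrink well.
            $blob   = gzcompress(json_encode($strings));
            $chunks = str_split($blob, $chunkBytes);

            $rows = array();
            foreach ($chunks as $i => $chunk) {
                $rows[$baseKey . '#' . $i] = $chunk;  // one row/item per chunk
            }
            return $rows;
        }

        function reassemble(array $rows)
        {
            ksort($rows, SORT_NATURAL);               // restore chunk order by key
            return json_decode(gzuncompress(implode('', $rows)), true);
        }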

    Read the article

  • LSI RAID-on-chip with RAID6 over two SAS links goes red when HDD enclosure is powered cycled; how to recover?

    - by GregC
    I have a RAID6 array managed by LSI 9286-8e card. I also have Sans Digital 24-bay NexentaSTOR JBOD enclosure with SAS extender built-in. They are connected to separate UPS devices. Normally, I'd shut down the PC, leaving RAID6 in healthy state. But today the power to JBOD enclosure was cut but PC kept running. After restarting the PC, all disks in RAID6 have lit up RED, and the only option in LSI MegaRAID manager app was to reset each disk to unassigned, thereby loosing all data on RAID6 array. Thankfully, I am only testing, but how would I recover if this were to happen in production?

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within a SSRS report would be simple, but unfortunately I was wrong. Customer Transaction Summary Example Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions End Date – end of the date range for which the report will summarize customer transactions Customer Ids – One or more customer Ids representing the customers that will be included in the report The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops-up: Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report  to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted. The Text Query Approach Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure. 
In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work. Manually Define Parameters First, we’ll need to manually define the parameters for report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define the ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a mutli-valued Integer parameter: In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area: 1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts + 2: ' EXEC rpt_CustomerTransactionSummary 3: @startDate=''' + @startDate + ''', 4: @endDate='''+ @endDate + ''', 5: @customerIds=@customerIdList')   By using the ‘Text’ query type we can enter any arbitrary SQL that we we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters: @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit. @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3. @endDate – This is another simple data parameter that will get passed through into the @endDate parameter of the stored procedure on line 4. At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog: Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET. 
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want): 1: Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String 2: Dim insertStatements As New System.Text.StringBuilder() 3: For Each paramValue As Object In paramValues 4: insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue)) 5: Next 6: Return insertStatements.ToString() 7: End Function   This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run-time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to: 1: =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)   In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report. Limitations And Final Thoughts When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using mutli-valued parameters in a stored procedure.

    Read the article

  • What Raid should I use for Website Static Files / Content

    - by Simon
    I'm building a web server (IIS7) and would like to know the best practice for storing static content and files uploaded by the website's users (predominantly pictures, but also other documents such as PDFs). I will keep the operating system on a RAID 1 array. Where should I keep the website's pages and files, its own static content, and that of its users? Should I place this content on a separate RAID array, and if so, which type? I was considering SLC SSDs (such as Intel's X25-E), but the following issues came to light: Will SLC SSDs give any improvement over a 2.5" 15k SAS drive for this type of content? If I did use SSDs, I believe I would still need RAID for redundancy, yet I've heard the Intel X25-E doesn't support TRIM. Does that rule them out as a legitimate option?

    Read the article

  • missing files after reassemble of RAID-5

    - by Kris_R
    I had to open my file-server's housing on Sunday to replace a faulty fan. What I didn't see was that one of the sata-cables was not properly connected. The 1st thing I did after a reboot was a check of the RAID status and it showed immediately that one drive is missing. Till this moment the device was not used (however it was mounted, so I'm not 100% sure that system did nothing). I stopped md0 and re-plugged the cable: mdadm --stop /dev/md0 poweroff After another reboot I checked the removed drive: mdadm --examine /dev/sdd1 ... Checksum : 3276bc1d - correct Events : 315782 Layout : left-symmetric Chunk Size : 32K Number Major Minor RaidDevice State this 0 8 49 0 active sync /dev/sdd1 0 0 8 49 0 active sync /dev/sdd1 1 1 8 65 1 active sync /dev/sde1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 17 3 active sync /dev/sdb1 I was a bit surprised that it was shown as active (even if earlier mdadm said, that this device was removed from array) and its checksum was OK. I recreated RAID with: mdadm --assemble /dev/md0 --scan The command mdadm --detail /dev/md0 showed that all drives were running and system was in "clean" state. I mounted the device md0 and then came hic-cup. I wanted to work on one of the last files that I had been using before all the situation and it was not there. In another place I missed actually all files from the directory where I was working. As far as I can see most of the files that are older than a few days are intact but some newer ones are missing. Now the big question: what would be your advice? Is there a way to get these data? I thought about removing the drive that was earlier labeled by mdadm and rebuild array with another empty HDD. I've found that after re-assemble the "broken" drive has another label (changed from sdd to sdb). Can this have influence on rebuilt process? If yes, how to reassemble the array properly? I'm sure the SATA-cables are connected still in the same order to the controller. p.s. Please no advises like "restore from backup". I'm doing back-ups on Sunday's night and this happened in the late afternoon, so backup is not really options for me. p.s.s. I asked this question on Unix&Linux but no answer came up during last two days. I'm getting quite anxious. Sorry for duplicating if any of you reading the other forum.

    Read the article

  • What is a “pretty and proper OO” way for handling sessions and authentication?

    - by asdfqwer
    Is coupling these two concepts a bad approach? As of right now I'm delegating all session handling, and whether or not a user wants to log out, to my config.inc file. As I was writing my Auth class I started wondering whether the Auth class should be taking care of most of the logic in config.inc. Regardless, I'm sure there's a more elegant way of handling this... Here is what I have in config.inc (a large chunk of this code is based on a reply I found on SO, except I can't find the source ._.):

    ini_set('session.name', 'SID');

    # session management
    session_set_cookie_params(24*60*60); // set SID cookie lifetime
    session_start();

    if (isset($_SESSION['LOGOUT'])) {
        session_destroy();                       // destroy session data
        $_SESSION = array();                     // destroy session data sanity check
        setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
        #header('Location: '.DOCROOT);
    } elseif (isset($_SESSION['SID_AUTH'])) {    // verify user has authenticated
        if (!isset($_SESSION['SID_CREATED'])) {
            $_SESSION['SID_CREATED'] = time();
        } elseif (time() - $_SESSION['SID_CREATED'] > 6*60*60) {
            // session started more than 6 hours ago
            session_regenerate_id();             // reset SID value
            $_SESSION['SID_CREATED'] = time();   // update creation time
        }
        if (isset($_SESSION['SID_MODIFIED']) && (time() - $_SESSION['SID_MODIFIED'] > 12*60*60)) {
            // last request was more than 12 hours ago
            session_destroy();                       // destroy session data
            $_SESSION = array();                     // destroy session data sanity check
            setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
        }
        $_SESSION['SID_MODIFIED'] = time();      // update last activity time stamp
    }
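    One common direction (only a sketch, not the asker's code) is to move the logout, regeneration, and expiry checks behind a small session-manager class so config.inc merely bootstraps it. The outline below is written in Python purely to keep the structure language-neutral; every class and method name is invented for illustration, and the PHP-specific calls are noted in comments.

    import time

    class SessionManager:
        REGENERATE_AFTER = 6 * 60 * 60    # rotate the session id every 6 hours
        EXPIRE_AFTER = 12 * 60 * 60       # drop sessions idle for 12 hours

        def __init__(self, store):
            self.store = store            # dict-like session storage

        def handle_request(self):
            now = time.time()
            if self.store.get("LOGOUT"):
                self.destroy()
            elif self.store.get("SID_AUTH"):
                created = self.store.setdefault("SID_CREATED", now)
                if now - created > self.REGENERATE_AFTER:
                    self.regenerate_id()
                    self.store["SID_CREATED"] = now
                modified = self.store.get("SID_MODIFIED")
                if modified is not None and now - modified > self.EXPIRE_AFTER:
                    self.destroy()
                else:
                    self.store["SID_MODIFIED"] = now

        def regenerate_id(self):
            pass                          # would call session_regenerate_id() in PHP

        def destroy(self):
            self.store.clear()            # would also clear the SID cookie in PHP

    session = SessionManager(store={"SID_AUTH": True})
    session.handle_request()
    print(session.store)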

    Read the article

  • Is There Any Way to Undo a Quick Initialize from a Perc6/i VD?

    - by Carlo71
    I stupidly fast-initialized an existing RAID 5 virtual disk holding 10 virtual machines after creating a new RAID 5 array; the VDs had switched order in the PERC controller's list. My server is a PowerEdge R710 with a PERC 6/i RAID controller running ESXi 5.1. The BIOS of the R710 and the PERC 6/i controller are both on the latest firmware. I tried the steps in this article: http://www.caseyfulton.com/dell-perc-6i-fast-initialize-how-to-restore/. However, BartPE just freezes on the Windows XP splash screen. Does anybody know a foolproof method? I have backups of the VMs, but I would like to avoid restoring all of them due to time constraints.

    Read the article

  • What is a good way to store tilemap data?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple layers of content require multiple files, each object is fixed onto the grid, objects cannot be rotated, the number of usable characters is limited, etc. So I'm doing some research into how to store the level data and map file. This concerns only the file-system storage of the tile maps, not the data structure used by the game while it is running. The tile map is loaded into a 2D array, so this question is about which source to fill the array from.

    Reasoning for a DB: From my perspective I see less redundancy of data when using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles used in a particular level from the database.

    Reasoning for JSON/XML: Visually editable files, and changes can be tracked via SVN a lot more easily. But there is repeated content.

    Does either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry? Currently the file looks like this:

    ....................
    ....................
    ....................
    ....................
    ....................
    ....................
    ....................
    .........GGG........
    .........###........
    ....................
    ....GGG.......GGG...
    ....###.......###...
    ....................
    .1................X.
    ####################

    1 - Player start point, X - Level Exit, . - Empty space, # - Platform, G - Gem
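    To make the JSON option concrete, here is a minimal sketch of one possible layered layout and how it could be filled into a 2D array. The field names (layers, tiles, gid, rotation) are invented for illustration and are not part of the starter kit, and the sketch is in Python rather than the project's C# purely to keep it short. Unlike the fixed-grid .txt format, each tile carries its own attributes, so rotation or extra per-object data simply becomes another field.

    import json

    # Illustrative layered map document; every key name here is an assumption.
    sample_map = json.loads("""
    {
      "width": 4,
      "height": 3,
      "layers": [
        {
          "name": "platforms",
          "tiles": [
            {"x": 0, "y": 2, "gid": 1, "rotation": 0},
            {"x": 1, "y": 2, "gid": 1, "rotation": 0},
            {"x": 3, "y": 1, "gid": 2, "rotation": 90}
          ]
        }
      ]
    }
    """)

    def layer_to_grid(map_doc, layer_name):
        # Fill a height x width 2D array from one named layer; empty cells stay None.
        grid = [[None] * map_doc["width"] for _ in range(map_doc["height"])]
        layer = next(l for l in map_doc["layers"] if l["name"] == layer_name)
        for tile in layer["tiles"]:
            grid[tile["y"]][tile["x"]] = tile["gid"]
        return grid

    print(layer_to_grid(sample_map, "platforms"))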

    Read the article

  • How to set up RAID 1 on Dell PERC S300 With Existing OS Install

    - by Daniel Dugger
    We have a server that is being used in production, though it was not originally meant to be. The main thing I want to add is a Dell PERC S300 RAID card so that the main hard drive (Windows Server 2008 R2) is mirrored onto another hard drive. I cannot initialize the disk and wipe the OS to create the array and then re-install. Is there a way to create the array from the current hard drive, without affecting it, and simply mirror the drive? If that card is not an option, is there a card that would allow this? The server is a Dell PowerEdge T110 II.

    Read the article

  • Look up and string operation; fetch a value based on searching a partial string

    - by Sam
    I have 2 sets of data: one set to be filled in by fetching the relevant data from a data array. I need to match Part#1 and find Part#2.

    DATA TO BE FILLED:

    Part#1     Part#2
    -------    -------
    4021006
    3808587
    3870480
    3083410
    3873905
    3890030
    4002065
    3699803
    3930218

    ARRAY OF DATA:

    Part#1                     Part#2
    -------                    -------
    4021006;3808587            1
    3808587                    2
    3870480;3083410;4002065    3
    3083410                    34
    3873905                    54
    3890030                    32
    4002065;3930218            65
    3699803                    75
    3930218                    68

    EXPECTED OUTPUT:

    Part#1     Part#2
    -------    -------
    4021006    1
    3808587    1;2
    3870480    3
    3083410    3;34
    3873905    54
    3890030    32
    4002065    3;65
    3699803    75
    3930218    65;68

    Can anyone help?
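    For what it's worth, here is a small Python sketch of the matching logic, independent of any particular spreadsheet tool: split each key in the array of data on ';' and collect that row's value for every part number the key contains. It simply follows the data above.

    from collections import defaultdict

    # Array of data: semicolon-joined part numbers and the value they map to.
    array_of_data = [
        ("4021006;3808587", "1"),
        ("3808587", "2"),
        ("3870480;3083410;4002065", "3"),
        ("3083410", "34"),
        ("3873905", "54"),
        ("3890030", "32"),
        ("4002065;3930218", "65"),
        ("3699803", "75"),
        ("3930218", "68"),
    ]

    parts_to_fill = ["4021006", "3808587", "3870480", "3083410", "3873905",
                     "3890030", "4002065", "3699803", "3930218"]

    # Every part number listed in a key collects that row's value.
    lookup = defaultdict(list)
    for key, value in array_of_data:
        for part in key.split(";"):
            lookup[part].append(value)

    for part in parts_to_fill:
        print(part, ";".join(lookup[part]))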

    Read the article

  • Excel: count number of unique/distinct row in range with condition

    - by Bertvan
    I have an Excel sheet with week numbers in column A and dates (timesheet entries) in column B. I need to know the number of days worked in each week, so I need the number of unique date entries per week number. I found formulas (both array and non-array) that handle this for a fixed range, but I want the results in another column, one per week number. So the result for the dataset below would be (the colon is just for clarity):

    14: 2
    15: 3
    17: 6
    20: 2
    21: 3

    If this is the source data:

    14: 4/04/2012
    14: 4/04/2012
    15: 10/04/2012
    15: 10/04/2012
    15: 11/04/2012
    17: 26/04/2012
    17: 26/04/2012
    17: 26/04/2012
    17: 26/04/2012
    17: 27/04/2012
    17: 27/04/2012
    20: 14/05/2012
    20: 14/05/2012
    21: 23/05/2012
    21: 23/05/2012
    21: 25/05/2012
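    Just to pin down the calculation being asked for (distinct dates per week number), here is a short Python sketch of the logic; it is only a reference for the intended grouping, not a substitute for an Excel formula, and the sample rows in it are made up.

    from collections import defaultdict

    def distinct_dates_per_week(rows):
        # rows: (week_number, date_string) pairs, one per timesheet entry.
        seen = defaultdict(set)
        for week, day in rows:
            seen[week].add(day)
        return {week: len(days) for week, days in sorted(seen.items())}

    # Example: two entries on the same day count once.
    print(distinct_dates_per_week([
        (14, "4/04/2012"), (14, "4/04/2012"),
        (15, "10/04/2012"), (15, "11/04/2012"),
    ]))
    # -> {14: 1, 15: 2}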

    Read the article

  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzwords of the year may lie closer together than it appears. The two concepts intersect at various points.

    BigData and Return on Investment of Marketing Campaigns

    In a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how "Big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their Brand". Regression analysis applied to big data volumes coming from social media will replace the failed attempts to justify marketing investments on social media in terms of followers and likes, he continues; "the measurement models applied by marketers on TV Campaigns don't work on social". We need to study the data with fresh eyes, and maybe then we will start understanding and measuring brand engagement.

    Social CRM and BigData

    The real value of Social CRM starts with analyzing masses of big data from social media in order to apply social intelligence techniques that allow us to classify new customer niches and communities and to define appropriate strategies for contacting potential customers. Gartner says the market for Social CRM is on pace to surpass $1 billion in revenue by year-end 2012, but in the words of Zach Hofer-Shall, analyst at Forrester Research, "Social customer relationship management is hard" (The Social CRM Arms Race Heats). To succeed, brands need three things: investing in new social tools, investing in consultancy, and investing in infrastructure for massive data storage and analysis. Neither CeX nor BigData are easy or cheap wins. But what are the customer benefits of such investments?

    Big Data and Brand Engagement

    Time is the most valuable asset of today's consumers: tired of information overload, exhausted by the terabytes of offerings, anxious because they do not get the same fast, multichannel experience from their service providers or preferred goods suppliers that they find on their social media. Yes, I know you have read this before; me too. But it is real. The motto of the Customer Experience philosophy, providing a consistent experience across multiple touchpoints that makes the customer/brand relationship easier and more valuable, finds its basis in understanding customers' preferences and context, for which BigData analysis is another imperative.

    In summary, I believe that using BigData analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving efficiency and marketing cost savings, growing the customer base, and increasing customer conversion and retention. In a word: the direction of future marketing.

    Read the article

  • Software RAID 10 on Linux

    - by vpetersson
    For a long time I've been thinking about switching to RAID 10 on a few servers. Now that Ubuntu 10.04 LTS is out, it's time for an upgrade. The servers I'm using are HP ProLiant ML115 machines (very good value), each with four internal 3.5" slots. I'm currently using one drive for the system and a software RAID 5 array across the remaining three disks. The problem is that this creates a single point of failure on the boot drive. Hence I'd like to switch to a RAID 10 array, as it would give me both better I/O performance and more reliability. The only problem is that good controller cards that support RAID 10 (such as 3ware) cost almost as much as the server itself. Moreover, software RAID 10 does not seem to work very well with GRUB. What is your advice? Should I just keep running RAID 5? Has anyone been able to successfully install software RAID 10 without boot issues?

    Read the article

< Previous Page | 277 278 279 280 281 282 283 284 285 286 287 288  | Next Page >