TODO //jeb next make this more matrix toolsy. maybe even a reason to use it and migrate our data to azure?
# Gotchas
1. Data Flow Parameters
* First drill into the Data Flow block, open the general Parameters, and declare a parameter.
* Then go back out to the pipeline, click on the Data Flow block, and you should now be able to set the declared parameter(s).
2. Data Flow: Cloud VS On-Prem
* Pipelines -> Asn -> Refresh WMS Data -> Settings
* Data Flows cannot use "on-prem" linked services as a source. Because of this, I needed to copy the data from WMS into our system first, which takes extra time. Additionally, I am getting this message: "You will be charged # of used DIUs * copy duration * $0.25/DIU-hour. Local currency and separate discounting may apply per subscription type."
* https://stackoverflow.com/questions/56640577/azure-data-factory-data-flow-task-cannot-take-on-prem-as-source
* https://feedback.azure.com/forums/270578-data-factory/suggestions/37472797-data-flows-add-support-to-access-on-premise-sql-a
2.5. Performance Issues with On-Prem Copy
* https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-performance#copy-performance-and-scalability-achievable-using-adf
3. Partitioning
* The actual Azure Data Factory run executes across multiple "Partitions", while Debug / Preview mode always runs on 1 partition, which can lead to unexpected results when running in the real world.
* I specifically had issues with sorting and numbering rows.
* It appears that my output data set was partitioned (grouped) on the first column by default. I never set this up, and I never saw it in preview mode, so it was very unexpected.
* Solution:
* "Force" the sort order with an explicit Sort transformation (in preview mode I did not need any sort). Luckily my "natural order" had a pattern I could sort on.
* When that alone did NOT help in the actual Data Factory run, I was finally able to have success by enabling the "Single partition" option on my new Sort transformation.
* Finally, ensure that any logic that depends on the sort order (like row numbering / filtering) takes place after your Sort transformation.
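The pitfall above can be sketched outside of ADF. This is plain Python (not ADF code) with made-up sample values, showing why numbering rows per partition disagrees with the single-partition numbering you see in preview:

```python
# Plain Python sketch (not ADF code): numbering rows per partition vs. numbering
# one globally sorted partition gives different results for the same data.
rows = ["B1", "A2", "A1", "B2"]

# Debug / Preview behavior: one partition, sorted, then numbered.
single_partition = {row: i + 1 for i, row in enumerate(sorted(rows))}

# Real run behavior (hypothetical grouping): rows partitioned by first character,
# each partition sorted and numbered independently.
partitions = {}
for row in rows:
    partitions.setdefault(row[0], []).append(row)
per_partition = {}
for part in partitions.values():
    for i, row in enumerate(sorted(part)):
        per_partition[row] = i + 1

# "B1" is row 3 globally, but row 1 inside its own partition.
```

This is why the "Single partition" Sort was needed: row numbers only become meaningful once all rows sit in one sorted partition.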

4. The contains function requires a #item expression:
contains($FileNames, #item == filename)
5. Weird errors
1. "store is not defined"
* Apparently string[] parameters aren't actually supported, even though the type appears in the selection dropdown (or maybe it requires a weird format?).
* I finally found an article on it:
* "There seem to be parsing problems when the variable holds multiple items, each encapsulated in single quotes, or potentially with the comma separating them. This often causes errors executing the data flow with messages like 'store is not defined'."
* DanielPerlovsky-MSFT: "While array types are not supported as data flow parameters, passing in a comma-separated string can work."
2. "shaded.databricks.org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: Bad Request"
* Can occur if you have checked "List of Files" on the source but did not provide a pattern.
* Pass the value in from the pipeline like this:
@{activity('ASN Load Numbers').output.value[0].LoadNumber}
* Then use the parameter in the data flow like this:
'ASN_' + replace(toString(currentTimestamp()), ':', '') + '_' + $LoadNumber + '_TEST.txt'
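To illustrate the comma-separated workaround from gotcha 5.1 (a plain Python analogy, not ADF expression syntax; the file names are hypothetical):

```python
# Python analogy (not ADF expression syntax): instead of a string[] parameter,
# send one comma-separated string and split it on the data flow side.
def pipeline_side(file_names):
    # The pipeline would join the values before passing the parameter.
    return ",".join(file_names)

def data_flow_side(param):
    # The data flow would split the string back into individual names.
    return param.split(",")

sent = pipeline_side(["a.txt", "b.txt"])      # hypothetical file names
received = data_flow_side(sent)
```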
# Resources
https://techcommunity.microsoft.com/t5/azure-data-factory/adf-data-flow-expressions-tips-and-tricks/ba-p/1157577
https://docs.microsoft.com/en-us/azure/data-factory/tutorial-data-flow
https://docs.microsoft.com/en-us/azure/data-factory/format-delimited-text
https://docs.microsoft.com/en-us/azure/data-factory/data-flow-derived-column
https://marlonribunal.com/azure-data-factory-data-flow/
https://www.sqlservercentral.com/articles/azure-data-factory-your-first-data-pipeline
https://medium.com/@adilsonbna/using-azure-data-lake-to-copy-data-from-csv-file-to-a-sql-database-712c243db658
Email templates should be Branded and as Personalized as possible!
It is helpful to group all of your prospects into 1 audience for tracking.
We can then break up our audience by groups, segments, and tags
(in Mailchimp).
This means a couple of things for our workflow:
Create 2 Audiences: 1 real audience and 1 test audience made up of your marketing team.
This will provide some safety/separation between your test runs and your live runs.
Additionally, this should provide separation between your test and live statistics.
*Ignore the previous steps and use Tags if you're on the free Mailchimp plan, as it only allows 1 audience.
Every audience imported into Mailchimp should be "Tagged" with
1) the name of the Prospecting tool that it was exported from and
2) the industry (or search term) that was used to create the audience.
Every audience member imported into Mailchimp should belong to a "Segment".
1) Segment A or Segment B (for A/B Testing)
2) To get a roughly random split, I set up Segment A (Last Name A - L) and Segment B (Last Name M - Z)
I also recommend using "Groups" when you import your "Clients" contacts and "Staff" contacts (for testing).
Lead Tracking
This is the process of monitoring responses, following-up, and tracking possible leads within a Customer Relationship Management (CRM) system.
Bitrix can be used to track all external communications, leads, quotes, deals, clients, and more!
It has a built in Sales Funnel and a ton of other awesome business tools.
The prospects and audiences will be managed in the previous steps and systems,
however when we get a positive response from a prospect they should then become a "lead"
and then entered and tracked in the Bitrix CRM system.
Please click on any of the Tool links above to see more information on how we use these amazing tools!
What tools and workflows do you use at your company?
Thanks for reading!
Expectations for a Campaign:
Open Rate / Reply Rate / Funnel
Higher Quantity = More Responses
Personalization = More Responses
EX:
Can you update your blog with my new link?
Skyscraper:
Find top ranked article on a topic.
We want to OUT-RANK this article.
(Ahref or Ubersuggest to get list of their backlinks,
export the links and reach out, "I think my article is better")
Don't reach out to everyone - audience matters.
Make sure your recipients are relevant, or you'll waste your
opportunity for when that recipient eventually is relevant.
Personalization / be specific
Research + Merge Fields are Key
Due diligence
POSTAGA
- Merge Field
- Hi {First Name}
- Prospecting (linkedin? website?)
blog article on OUTREACH / AUDIENCE / LEAD process
-not specific to a tool but for M.T. training
-the entire process, how to find an email, add to audience,
add to campaign, reach out, add to bitrix, ALL of that.
postaga -> mailchimp -> bitrix
ORRRRR are these 3 separate articles linked to each other
ORRRRR 3 separate articles with the same subtopic "outreach campaign"
PROVIDE VALUE
OFFER SOMETHING IN YOUR EMAIL
RECIPROCITY
"we have x followers and blah blah and we can help you"
send a "Follow up"
"sequence of emails" 3 - 5
POSTAGA WORKFLOW!
Postaga blog
->
New Campaign -> Resources
->give it a blog post url
->merge fields into templates!?
->resource opportunity search
->click one to preview (do they have a lot of links)
->select all to pitch to them
->analyze links
->this will find people
->you can manually 'prospect' and enter an email if it didn't find one
->you want to fill out all the fields for MAIL MERGE
->postaga has a 'deliverability / verified tag'
->go thru list for best email contacts
->you can also find twitter and linkedin
->export data to csv (if we want to use mailchimp)
->or go to next step sending emails in postaga
->pick resources campaign
->it does have follow-up sequences and you can set custom rules too
->if going to go out on weekends, go to next work day
->3 follow-up email templates to preview and edit
->check the merge fields as much as possible
->be specific / personalize
->you can create your own merge fields in templates
->preview the merges and see warnings on missing fields
->go back and add fields
->send test if wanted
->https://postaga.com/demo grab a personal demo
Try to do a podcast? Podcast campaign?
prospecting -> audience building -> outreach -> lead tracking
CAN-SPAM requirements
1. way to opt out
2. mailing address
personalization 'snippets' i guess override emails?
Set up your Merge Fields:
Audience Dashboard -> Settings -> Audience fields and *|MERGE|* tags
You should configure this page so that it matches the columns in the audience files that you will be importing.
The Merge Fields should be in the same order as the columns in the imported file(s).
You can also add any useful fields that are present in the imported file(s) and remove unused fields.
Use the Data Type of "Website" if you're adding a field that contains a link (URL).
This screen also ties with Tag names that you will be able to use in your email campaign templates.
If you do the above prior to importing, then when you perform your audience imports (Audience -> Add Contacts -> Merge -> File) the Merge Field mapping will be streamlined / automatic.
Remember to tag your imports with the Tool and Industry used to prospect the audience!
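A quick way to sanity-check an import file before uploading (a hedged Python sketch; the merge field names below are hypothetical examples, so match them to your actual audience settings):

```python
import csv
import io

# Hypothetical merge fields, in the order configured under
# Audience fields and *|MERGE|* tags; adjust to your audience.
merge_fields = ["Email Address", "First Name", "Last Name", "Website"]

csv_text = (
    "Email Address,First Name,Last Name,Website\n"
    "jane@example.com,Jane,Doe,https://example.com\n"
)
header = next(csv.reader(io.StringIO(csv_text)))

# Mapping is automatic when the header matches the merge fields in order.
matches = header == merge_fields
```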
1 audience
use segments, tags, groups to separate
tags = label (matrix tools added) Platform / Industry
groups = preference center (user picks in a form)
segments = data you already have, use to send more targeted marketing
setup a "tag" of matrix tools test users?
and then a segment for us? to send test emails to?
run a test campaign
"PREVIEW SEGMENT"
"INTEGRATION"
"AUDIENCE AND ADD SEGMENT"
BRAND AND PERSONALIZATION AS MUCH AS POSSIBLE ADD VALUE with PITCH
TAGS = THE INDUSTRY (OR THE LIST IT CAME FROM MAYBE A TAG FOR WHAT PLATFORM)
===========2===========
notes on mailchimp INTRO
audience - marketing crm, audience dashboard, tags/segments/groups (CAN ADD NOTES FROM BLOG HERE)
-- 1 audience with tags/segments/groups -> start with a test tag? OR test audience??
-- audience dashboard setup a test audience.
-- email/ad/postcard?
-- Manage Audience -> Settings -> Audience name and defaults
-- -- Form settings / Email settings / reCAPTCHA
-- -- embedded signup forms for opt-ins for our newsletter / mailing list -> auto add to audience!
-- -- GDPR (missed that)
-- All contacts -> audience. subscribed and unsubscribed
brand - template and studio
-- name, logo, font colors, identity, image, emotional connection
-- email templates
-- -- select a LAYOUT for your template
-- -- template gets all the good brand things
-- -- every [month] create an edit of the template to turn it into a 'campaign'
-- -- content studio - store logos and assets
-- -- -- products - INTEGRATES WITH E-COMM!! COOL!!
-- -- -- giphy and instagram! to pull your images from social media
campaigns - automation features !?
-- each campaign has stats, opens / clicks / revenue
-- automated campaigns
-- website (create a new one) not helpful?
-- create campaign options
-- -- automated emails -> like welcome emails
-- -- other cool options and screenshots?
-- -- multivariate - A/B Testing. COOL!! (test 2 different subject lines see what gets opened)!
insights
-- IMPORT CONTACTS -> checkout options here -> from csv??!!
studio, site???
settings -> domain authentication -> verify and authenticate -> contact@matrix.tools
hidden costs in mailchimp?? billing tab??
integrations -> facebook, ecwid, bitrix, slack !!!
COMING SOON! Hit the Heart Button if you want me to finish this blog next!
semrush, ubersuggest,
SEMrush Summary:
STEP 1
Go to Market Explorer
https://www.semrush.com/market-explorer/overview/?mindbox-click-id=64cb3615-8e6c-4bc3-b987-07ee362c0c8e&utm_source=mindbox&utm_medium=Email&utm_campaign=en_NewWelcome2&utm_content=Welcome
STEP 2
Enter a competitor’s URL.
STEP 3
Choose between Industry competitors (all the websites fighting for the audience with the same interests) and Organic competitors.
STEP 4
Find Leaders in the Growth Quadrant. These websites grow faster and have a larger online presence than other players. What do they do differently in their marketing?
STEP 5
Explore the Leaders’ traffic generation strategies. In our example, sunandski.com focuses on Google Ads more than other competitors. However, their search performance is lower than the market average.
STEP 6
Analyze the key Leader’s online strategy in depth. Go to Traffic Journey in Traffic Analytics.
https://www.semrush.com/analytics/traffic/
Links:
SEMrush Quick Blog
https://www.semrush.com/blog/6-competitor-insights-you-can-get-in-30-minutes-1/?mindbox-click-id=18c895d1-baa7-4b33-a588-d9ce91229055&utm_campaign=en_NewWelcome1&utm_content=Welcome&utm_medium=Email&utm_source=mindbox
SEMrush Complete Blog
https://www.semrush.com/blog/how-to-do-competitor-analysis-in-digital-marketing/?mindbox-click-id=168b6c5d-68ed-42e9-a3ff-4418bf7f1b51&utm_source=mindbox&utm_medium=Email&utm_campaign=en_2020Welcome1&utm_content=Welcome
SEMrush Course
https://www.semrush.com/academy/courses/competitive-analysis-and-keyword-research-course/getting-the-big-picture
Neil Patel (ubersuggest video)
https://www.youtube.com/watch?v=CaPRbcxUGeE&pp=wgIECgIIAQ%3D%3D&feature=push-fr&attr_tag=f0Mp3Nj-PdgAoNHL%3A6
"Business Intelligence" comprises the strategies and technologies used by enterprises for the data analysis of business information.
BI technologies provide historical, current, and predictive views of business operations.
Structured
ETL (Extract, Transform, Load)
Database / Data warehouse / Data lake
"Data Analysis" is an approach to data that seeks to find insights and identify patterns or trends (Nuggets of Insight).
COMING SOON! Hit the Heart Button if you want me to finish this blog next!
new Link
{
Title = "chrome://about",
Description = "Chrome Secrets - Only works on a Chrome browser"
},
-- chrome -> extensions -> seo, honey, buffer, scrum for trello https://chrome.google.com/webstore/detail/scrum-for-trello/jdbcdblgjdpmfninkoogcfpnkjmndgje
------------- email tracker https://chrome.google.com/webstore/detail/email-tracker/bnompdfnhdbgdaoanapncknhmckenfog
COMING SOON! Hit the Heart Button if you want me to finish this blog next!
-- all the slack channels that we use and for what?
-- slack C# code integration webhooks
-- #bots github / azure
-- #alerts errors / downtime
-- IFTTT/Zapier
-- our 10 app integrations
COMING SOON! Hit the Heart Button if you want me to finish this blog next!
-- w3c automate validation
-- HOW TO SHOW RESULTS??
-- ? https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml
-- ideal ultimate Digital Marketing / SEO pipeline - build, test, database updates, monkeytestit (can do via slack?), automate an SEO audit, SLACK/EMAIL updates,
-- unit/connectivity/integration test the api, update sitemap, submit new sitemap to google with sitemap api, anything else with google?, bing submit, slack notify with link to update!
-- automate your entire job!!! send out the audits? notify if worse?
-- BONUS. Voice activate it? (link to IFTTT BLOG?)
-- https://medium.com/@vnqmai.hcmue/deploy-asp-net-core-to-heroku-for-free-using-docker-bd6d6fc161ae
-- https://www.bing.com/webmasters/url-submission-api#APIs
honorable mention: docker, heroku, netlify, bitbucket, travis-ci, buddy, GitHub Actions
https://dev.to/alrobilliard/deploying-net-core-to-heroku-1lfe?utm_campaign=dotNET%20Weekly&utm_medium=email&utm_source=week-36_year-2020
I wonder which article is better - this one or the medium.com article above - because WE ONLY NEED 1!!
COMING SOON! Hit the Heart Button if you want me to finish this blog next!
-- argos + matrix tools monthly charts
-- Api Request to Google with OAuth Security (A.R.G.O.S.)
----------- https://console.developers.google.com/apis/library?project=matrixtools-argos
----------- https://console.developers.google.com/apis/credentials?project=matrixtools-argos api creds for argos
----------- https://developers.google.com/webmaster-tools/search-console-api-original/v3/searchanalytics/query?apix=true
----------- https://developers.google.com/api-client-library/dotnet/guide/aaa_oauth the others are actually to auth with your account / see SERVICE ACCOUNT example
----------- https://search.google.com/search-console/users?resource_id=sc-domain%3Amatrix.tools (remember to add SERVICE ACCOUNT to each property)
say something about the www.matrix.tools/portal dashboard and how we have custom dashboards! and a matrix tools monthly
share the code from google controller!! sc-domain:property and other gotchas.
- there are so many google auths, one is for public, oauth, this one, and single-sign-on google stuff too for sites (CONFUSING)
We also recommend using a "Website Copy" tool in order to pull down all of a website's images.
I prefer Cyotek on Windows and Wget on Mac and Linux.
Pro Tips: You can install and run Wget via the command line.
We will need Admin access or Credentials for each of the Client's Social Media platforms (these Credentials can usually be found in Bitrix).
Create a new local folder, something like /Desktop/MatrixTools/SocialMedia/{client-name}.
Now you can use your "Website Copy" tool to copy their website down into your new folder structure.
If you would like, you can delete everything except for the IMAGES!
Log into Buffer.
Add any new Social Media platforms to Buffer.
Now if you would like, you can log into Buffer and manually post an image to each platform every day.
If you do not want to do this every day, you can schedule the "Daily - Client Images" in advance via Buffer's scheduler.
Within the Image Posts, remember to add a link to the Client's Website and/or a friendly message.
You might want to move or delete each image after you've uploaded it to Buffer. This should ensure that you do not post duplicates.
A.I. Generated Content
Log into Quuu.
Add the new Social Media account to Quuu (you can do this by Refreshing the Buffer connection).
Pro Tips: Initially, set this to Manual so we can approve the content for the first couple of days.
If all of the A.I. content is relevant and positive to the client, then we can come back and toggle this setting to put it on Auto-pilot!
This will auto generate content that will then be shared (thru Buffer) to Social Media platforms.
Availability Zone = Different Data Centers broken out by Region (AZ for short and they always end with a LETTER, Regions end with a NUMBER)
IAM = Identity Management / Users and Roles at a Global Level
IAM Foundation = For Company Integration like Active Directory (SAML)
EC2 = Virtual Machine (default is Linux with a Firewalled / Dynamic / Public IP, if you need a Static IP create an "Elastic IP")
SSH into an EC2 = ssh -i {path.pem} ec2-user@{ip}
Security Groups = Control traffic rules to the EC2 (Firewall)
Time outs = Security Group configuration issue / Connect Refused = Application issue
Security Groups can reference other Security Groups, multiple instances, etc. (They are locked down to a Region)
Practice: Install Apache on an EC2
SSH into the EC2 (Linux) as "ec2-user", then elevate to "root" (sudo su)
sudo yum update
sudo yum install -y httpd.x86_64
sudo systemctl start httpd.service
sudo systemctl enable httpd.service (restart the service on reboot)
Open Port 80 via Security Group
echo "Hello World from MatrixTools-EC2-01 - $(hostname -f)" > /var/www/html/index.html
Practice: Let's Automate That!
EC2 User Data can run scripts at first boot (bootstrapping)
EC2 - Launch Instance - Amazon Linux 2 AMI
Configure Instance Details - Advanced Details - User data
Paste in the "script" from above
It must start with the following line, preceding any script:
#!/bin/bash
EC2 Instances - Launch Types
On-Demand
Pay for what you use
Highest cost but no upfront payment
No long-term commitment
Recommendation: for auto-scaling or short-term, uninterrupted workloads, where you can't predict how the application will behave
Reserved
Up to 75% discount compared to On-demand
Pay upfront for what you use with long-term commitment
Reservation period can be 1 or 3 years
Reserve a specific instance type
Recommendation: for steady state usage applications (think database)
Convertible Reserved
Up to 54% discount compared to On-demand
Can change the EC2 instance type
Scheduled Reserved
Only launch within time window you reserve
Spot
Best Discount: up to 90% compared to On-demand
You bid a max price and keep the instance as long as the spot price stays under your bid
Price varies based on offer and demand
Spot instances are reclaimed with a 2 minute notification warning when the spot price goes above your bid
Recommendation: for batch jobs, Big Data analysis, or workloads that are resilient to failures. NOT for critical jobs or databases
Dedicated Hosts
Physical dedicated EC2 server / Full control and visibility
Allocated for your account for a 3-year period reservation
More expensive
Recommendation: for complicated licensing / regulatory or compliance needs (must keep software and data on separate machine)
Dedicated Instances
Instances running on hardware that's dedicated to you
May share hardware with other instances in same account
No control over placement
EC2 Instance Types
R = applications that need a lot of RAM
C = applications that need a lot of CPU (databases)
M = applications that need balance (middle / medium)
I = applications that need good local I/O (databases)
G = applications that need good GPU (video rendering / machine learning)
T2/T3 = Burstable Instances ("burst credits")
T2/T3 Unlimited = Unlimited Bursts
EC2 AMIs (custom base images)
Pre-installed packages, settings, etc
AMI Storage lives on S3 (inexpensive, just remove old ones)
Public and sharable ones in the marketplace
Just right click on your service instance - Image - Create Image
Images - AMIs - right click - Launch (AMIs are region specific / cannot use same ID across regions)
FAQ: You can copy and share AMIs and make them public by changing the permissions. If you cannot copy an AMI a lot of times you can still Launch an instance of it and create your own AMI from that instance (billingProduct code issues)
EC2 Placement Group Strategies
Cluster - a low-latency group in a single Availability Zone (high performance / high risk)
Spread - across hardware and AZs (critical applications / low risk / limit of 7 instances per AZ)
Partition - across partitions (hardware within a single AZ / compromise)
EC2 for Solution Architects
Billed by the second, t2.micro is free tier
Lock down port 22 to only trusted SSH sources, and chmod 0400 your .pem key file
Timeout issues - Security groups issues
Security Groups can reference other Security Groups
ELB (Load Balancer for EC2)
Classic / Application / Network
Built-in Health Check
Use the static hostname NOT the underlying IP
Cannot scale instantaneously - contact AWS for a "warm-up"
4xx errors are client, 5xx are application, 503 means no capacity or no targets
If LB cannot connect to application, check security groups
Application (Layer 7)
Load balancing to multiple machines (target groups)
Load balancing to multiple applications on same machine (containers)
Load balancing based on route in URL
Load balancing based on hostname in URL
Port Mapping feature to redirect to a Dynamic Port
Stickiness can be enabled at the target group level by cookies
Supports HTTP, HTTPS, and Websockets
The application servers don't see the IP of the client directly, but it is placed in a header (X-Forwarded-For, X-Forwarded-Port, X-Forwarded-Proto)
Network (Layer 4)
Load balancing for TCP
High performance ~ millions of requests per second
Support for static IP or elastic IP (1 per AZ)
Less latency ~ 100 ms (vs 400 ms for ALB)
Use if Extreme Performance is required
Can see the client IP directly
Public facing = must attach Elastic IP – can help whitelist by clients
More Load Balancer Notes
App LB provide a Static DNS name
Network Load Balancers expose a public static IP (and can work with TCP)
Use STICKINESS if Session Data is important (client gets same instance via cookie)
0.0.0.0/0 means allow anyone from anywhere
Can also use a (LB) security group, so that traffic has to come from load balancer
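You can confirm what 0.0.0.0/0 means with Python's standard ipaddress module:

```python
# 0.0.0.0/0 is the IPv4 network that contains every address,
# so a rule with this CIDR allows traffic from anywhere.
import ipaddress

anywhere = ipaddress.ip_network("0.0.0.0/0")
sample = ipaddress.ip_address("203.0.113.9")  # arbitrary public IP

covers_sample = sample in anywhere        # any IP matches
total_addresses = anywhere.num_addresses  # all 2**32 IPv4 addresses
```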
Auto Scaling
Cooldown = period between each scale action
ASG = Auto Scaling Group
Default Termination Policy = AZ with most instances and then oldest config
ASGs are free and also auto-restart instances to KEEP X number of instances running!
Scale based on CloudWatch alarms (Custom Metrics with PutMetric API)
Automatically Register new instances to a load balancer
SNI = to specify SSL hostname they reach
ACM = AWS Certificate Manager (X.509 SSL/TLS server certificate)
Finally, I lost a lot of time on this good one. After building and deploying, I was having issues on my production server.
For some reason the InProcess hosting model was added and seemed to break everything. I still want to see why that isn't working, but for now
I've switched the hosting model using the AspNetCoreHostingModel setting in the csproj file:
If you would like to setup a custom email alias for you or your company, please check out this link first:
Custom Email Aliases
This article will go over how to setup SMTP for an automatic email generator.
If I am doing this for a client company, I will first setup their Custom Email Alias (with the link above).
I will setup a wildcard for their domain pointing all emails to their inbox...
However, I will also create a "contact" alias pointing to my own inbox, that way I can setup the Email Alias myself and attach it to my own account.
I will verify this new "contact" email on my Gmail account and set it as NOT an alias.
I will also hardcode all "to" values to my email address.
I will then test their SMTP, test my code to generate an email from their domain, and test their contact form. Finally, I delete the "contact" alias
(which switches everything back over to the client's inbox) and change any "to" values in the code from my email address to their inbox.
That is really the main point of the article, however, I will also include some sample C# code on how I do this:
// Requires: using System.Net; (NetworkCredential) and using System.Net.Mail; (SmtpClient, MailMessage)
// Assumes from, to, replyTo, subject, htmlBody, and Password are defined by the caller.
using (var SmtpServer = new SmtpClient("smtp.gmail.com"))
{
var mail = new MailMessage
{
From = from,
Subject = subject,
IsBodyHtml = true,
Body = htmlBody,
};
mail.To.Add(to);
// Adjust the Reply To
mail.ReplyToList.Clear();
mail.ReplyToList.Add(replyTo);
SmtpServer.Port = 587;
SmtpServer.UseDefaultCredentials = false;
SmtpServer.DeliveryMethod = SmtpDeliveryMethod.Network;
SmtpServer.Credentials = new NetworkCredential("your-email@gmail.com", Password);
SmtpServer.EnableSsl = true;
SmtpServer.Send(mail);
EmailRetryAttempts = 0;
return true;
}
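For comparison, here is a hedged sketch of the same flow using Python's standard smtplib and email modules (the addresses, subject, and credentials are placeholders, not real accounts):

```python
import smtplib
from email.message import EmailMessage

def build_message(from_addr, to_addr, reply_to, subject, html_body):
    # Mirrors the MailMessage setup above: HTML body plus an explicit Reply-To.
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg["Reply-To"] = reply_to
    msg["Subject"] = subject
    msg.set_content(html_body, subtype="html")
    return msg

def send(msg, user, password):
    # Port 587 with STARTTLS is the equivalent of EnableSsl = true above.
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(msg)

msg = build_message("noreply@client-domain.example", "contact@client-domain.example",
                    "contact@client-domain.example", "Contact Form", "<p>Hello</p>")
```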
COMING SOON! Hit the Heart Button if you want me to finish this blog next!
Free website generator.
Link to Matrix Tools. -> at first this should just go to contact form?
Link to jessebooth.info for an example.
any code from documentservice? api call for file upload.
use this code for resume generator?
The browser plug-in for Trello at http://scrumfortrello.com/ (free)
Simple Agile Metrics
1 month "Sprints" or cycles of work.
1 hour "Points" or approximate units of work.
Velocity = how many "Points" do we finish in a "Sprint".
In this example, our current Velocity is ~40.
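The arithmetic is simple enough to write down (the Sprint results below are hypothetical numbers, chosen to match the ~40 example):

```python
# Velocity = completed "Points" (hours) per "Sprint" (month);
# average the last few Sprints to set the next Sprint's target.
completed_points = [38, 42, 40]  # hypothetical last three monthly Sprints

velocity = sum(completed_points) / len(completed_points)
```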
Simple Agile Goals
Track who is working on what.
Track what is in progress and what is next, etc.
Figure out your Average Velocity so you can be predictable.
Have an estimation if things are going to be late.
Have an estimation if more work needs to be ready to be pulled into the current Sprint.
Award Team Bonuses for Great Velocity!?!?
Next, let's go over a simple layout for your Trello "Kanban" Board.
Trello Boards
Backlog - a board with a card for everything you want to work on later.
Possible Columns:
Needs more information
Needs a meeting
Low priority ideas
Future marketing campaigns
Etc.
Sprint - a board with a card for everything you want to work on now.
Trello Sprint Columns
NEXT SPRINT
This column isn't really part of the current Sprint, but I do recommend it for a couple of reasons.
In the image above, notice the title of the list contains "NoBurn". This is a trick for the ScrumForTrello.com plug-in.
It tells the plug-in, do not count this column towards the current Sprint's "Burn Down" or stats.
In other words, it's a way to keep it on the board and not count it towards the board's stats, because this is just a place holder for the next Sprint.
This column is used to prep for the next Sprint and also can be used to move work in or out of the current Sprint if the team seems to be finishing early or late.
Notice the title of this column also contains ~40. This is our "Velocity". It is here as a reminder of our current goal (based on previous Sprints' actual results).
If Velocity is not reached for a Sprint, there is no penalty, but a bonus could be given if the number is surpassed by a lot! :)
Additionally, this column (and all columns) should be sorted top to bottom by Priority, so everyone gets an idea of what is coming next.
Every month, this list will be "rolled forward", by renaming it "CURRENT SPRINT" and then adding a new "NEXT SPRINT" list.
Finally, this helps to ensure that we do not plan for too much work in a Sprint, by keeping this column "estimated" and below the team's Velocity.
Which also brings us into Time Tracking and the ScrumForTrello plug-in.
Notice the big bold number(s) on each card (you can also see a total of these numbers on each list). These are the Estimated and Completed hours for each card (or within each list).
This can be added manually to a card by clicking on the card's title and adding (Estimated) numbers and/or [Completed] numbers.
If you install the ScrumForTrello plug-in, helper buttons will also appear every time you click on the title of a card so you do not have to add the numbers manually.
(The ScrumForTrello plug-in also "converts" these numbers from the card's title into actual fields on the card, so Time Tracking will make much more sense if you install the plug-in.)
You can also create a Trello power-up to add (?) and [0] to the card title, so people who do not have the plug-in can use the power-up button.
All of this and the "Burn Down" will come back at the end!
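The (Estimated) / [Completed] title convention is easy to total up yourself. Here is a hedged Python sketch (the card titles are made-up examples, and ScrumForTrello's own parsing may differ in details):

```python
import re

# Matches "(4)" style estimates and "[2]" style completed hours in a card title.
TITLE = re.compile(r"\((?P<est>\d+(?:\.\d+)?)\)|\[(?P<done>\d+(?:\.\d+)?)\]")

def totals(card_titles):
    est = done = 0.0
    for title in card_titles:
        for m in TITLE.finditer(title):
            if m.group("est"):
                est += float(m.group("est"))
            else:
                done += float(m.group("done"))
    return est, done

cards = ["(4) Fix login bug [2]", "(8) New landing page", "(1) Typo [1]"]
est_total, done_total = totals(cards)
```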
CURRENT SPRINT
Each month, this column should start out containing all the work that we hope to complete within the current month.
We also leave a reminder on this list title: "Agree on Time Tracking and Paycheck before moving forward!"
This just means that in this list, the card should have 3 things: An Assignee, an Estimate, and a Paycheck.
After those 3 details are added to a card, it is "free" to be moved forward by the Assignee.
IN PROGRESS
When an employee is actively working on a task, they should move their card to this list.
This is a great way to track what everyone is working on and also who isn't currently working on anything and could possibly be given a task.
BLOCKED
If a task has been started, but for some reason cannot be completed at this time, it should be moved to the Blocked column.
Leads, Managers, or Executives should be responsible for removing any and all blockers and then moving the card back to Current Sprint or In Progress list.
EXECUTIVE REVIEW
When a task is complete, the assignee should move the card to a Review column.
DONE
When a task is reviewed and the client has accepted the changes, we can then move the task to Done so that the Paycheck numbers can be added to Payroll.
Every month after Payroll and statistics are recorded, this column is archived and a new one is created.
The column title must also contain the word "Done" for the ScrumForTrello plug-in to work correctly.
Time Tracking and the "Burn Down"
If you have the plug-in installed, you should see a FIRE button on your Trello board to link you to the "Burn Down" chart (shown below).
ScrumForTrello will provide you with a lot of Sprint progress statistics here.
Scrolling down on the ScrumForTrello page, you can also quickly add estimates to each card instead of going one-by-one.
Moving cards to the Done column will automatically "complete" the hours for that card.
Additionally, to help the "Burn Down" chart and show real progress, for longer tasks remember to add [Actual] worked hours to the card at the end of the day.
Extra Trello Tips
Use labels to quickly mark different projects, and you can click on one of the labels to switch them into a more detailed mode.
Press 'Q' on a Trello board to show only cards that are ASSIGNED TO YOU. This mode also shows the number of cards in each list so you don't have to count cards.
COMING SOON! Hit the Heart Button if you want me to finish this blog next!
Website Development with Vue.js... Compare Angular vs React vs Vue...
Thinking an easy example would be a Black Jack best practice app. Put in 3 cards as fast as possible and it tells you the correct play.
Another good feature would be to go down the list of possible hands and QUIZ the user on what the best play would be.
Then give the correct move, and after 10 or so random hands, show a score.
The bonus here is that it is related to the card app that I made in React for practice.
COMING SOON! Hit the Heart Button if you want me to finish this blog next!
Website Development with React
.jsx
immutable
Show off the React card app
Also add the feature so it can toggle the MODE.
Add a link to it somewhere / deploy it
Also add react-cli cheatsheets!
Add to Mario-Shell!!
npx create-react-app deck-of-cards (i think)
Maybe a vue vs react vs angular blog!?!?!?!
Or relate it to the Vue app if Vue app ends up being card related for sure.
Create a VIDEO app in React???
AWS Certification Notes (Chapter 1: Servers and Load Balancers)
NOTES:
Availability Zone = Different Data Centers broken out by Region (AZ for short; they always end with a LETTER, Regions end with a NUMBER)
IAM = Identity Management / Users and Roles at a Global Level
IAM Foundation = For Company Integration like Active Directory (SAML)
EC2 = Virtual Machine (default is Linux with a Firewalled / Dynamic / Public IP, if you need a Static IP create an "Elastic IP")
SSH into an EC2 = ssh -i {path.pem} ec2-user@{ip}
Security Groups = Control traffic rules to the EC2 (Firewall)
Timeouts = Security Group configuration issue / Connection Refused = Application issue
Security Groups can reference other Security Groups, multiple instances, etc. (They are locked down to a Region)
Practice: Install Apache on an EC2
SSH into the EC2 (Linux) with user "ec2-user" (root login is disabled on Amazon Linux)
sudo yum update
sudo yum install -y httpd.x86_64
sudo systemctl start httpd.service
sudo systemctl enable httpd.service (restart the service on reboot)
Open Port 80 via Security Group
echo "Hello World from MatrixTools-EC2-01 - $(hostname -f)" > /var/www/html/index.html
Practice: Let's Automate That!
EC2 User Data can run scripts at first boot (bootstrapping)
EC2 - Launch Instance - Amazon Linux 2 AMI
Configure Instance Details - Advanced Details - User data
Paste in the "script" from above
It must start with the following line, preceding any script:
#!/bin/bash
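Putting the Apache steps together, the full User data script would look like this (the sudo prefixes are dropped because user data runs as root at first boot):

```shell
#!/bin/bash
# EC2 User Data: runs once as root on first boot
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "Hello World from MatrixTools-EC2-01 - $(hostname -f)" > /var/www/html/index.html
```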
EC2 Instances - Launch Types
On-Demand
Pay for what you use
Highest cost but no upfront payment
No long-term commitment
Recommendation: for auto-scaling or short-term and uninterrupted workloads, where you can't predict how the application will behave
Reserved
Up to 75% discount compared to On-demand
Pay upfront for what you use with long-term commitment
Reservation period can be 1 or 3 years
Reserve a specific instance type
Recommendation: for steady state usage applications (think database)
Convertible Reserved
Up to 54% discount compared to On-demand
Can change the EC2 instance type
Scheduled Reserved
Only launch within time window you reserve
Spot
Best Discount: up to 90% compared to On-demand
You bid a maximum price and keep the instance as long as the spot price stays under your bid
Price varies based on offer and demand
Spot instances are reclaimed with a 2 minute notification warning when the spot price goes above your bid
Recommendation: for batch jobs, Big Data analysis, or workloads that are resilient to failures. NOT for critical jobs or databases
Dedicated Hosts
Physical dedicated EC2 server / Full control and visibility
Allocated for your account for a 3-year period reservation
More expensive
Recommendation: for complicated licensing / regulatory or compliance needs (must keep software and data on separate machine)
Dedicated Instances
Instances running on hardware that's dedicated to you
May share hardware with other instances in same account
No control over placement
EC2 Instances Types
R = applications that need a lot of RAM
C = applications that need a lot of CPU (databases)
M = applications that need balance (middle / medium)
I = applications that need good local I/O (databases)
G = applications that need good GPU (video rendering / machine learning)
T2/T3 = Burstable Instances ("burst credits")
T2/T3 Unlimited = Unlimited Bursts
EC2 AMIs (custom base images)
Pre-installed packages, settings, etc
AMI Storage lives on S3 (inexpensive, just remove old ones)
Public and sharable ones in the marketplace
Just right click on your service instance - Image - Create Image
Images - AMIs - right click - Launch (AMIs are region specific / cannot use same ID across regions)
FAQ: You can copy and share AMIs and make them public by changing the permissions. If you cannot copy an AMI, a lot of times you can still launch an instance of it and create your own AMI from that instance (billingProduct code issues)
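The console clicks above can also be done with the AWS CLI; here is a sketch (the instance and image IDs below are placeholders):

```shell
# Create a custom AMI from a running instance (IDs are placeholders)
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "MatrixTools-Base-AMI"

# AMIs are region specific: copy one into another region to reuse it there
aws ec2 copy-image \
    --region us-west-2 \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --name "MatrixTools-Base-AMI"
```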
EC2 Placement Group Strategies
Cluster - a low-latency group in a single Availability Zone (high performance / high risk)
Spread - across hardware and AZs (critical applications / low risk / limit of 7 instances per AZ)
Partition - across partitions (hardware within a single AZ / compromise)
EC2 for Solution Architects
Billed by the second, t2.micro is free tier
Lock down port 22 to only the IPs that need SSH, and chmod 0400 your .pem key file
Timeout issues - Security groups issues
Security Groups can reference other Security Groups
ELB (Load Balancer for EC2)
Classic / Application / Network
Built-in Health Check
Use the static hostname NOT the underlying IP
Cannot scale instantaneously - contact AWS for a "warm-up"
4xx errors are client, 5xx are application, 503 means no capacity or no targets
If LB cannot connect to application, check security groups
Application (Layer 7)
Load balancing to multiple machines (target groups)
Load balancing to multiple applications on same machine (containers)
Load balancing based on route in URL
Load balancing based on hostname in URL
Port Mapping feature to redirect to a Dynamic Port
Stickiness can be enabled at the target group level by cookies
Supports HTTP, HTTPS, and Websockets
The application servers don't see the IP of the client directly, but it is placed in a header (X-Forwarded-For, X-Forwarded-Port, X-Forwarded-Proto)
Network (Layer 4)
Load balancing for TCP
High performance ~ millions of requests per second
Support for static IP or elastic IP (1 per AZ)
Less latency ~ 100 ms (vs 400 ms for ALB)
Use if Extreme Performance is required
Can see the client IP directly
Public facing = must attach an Elastic IP, which helps clients whitelist your traffic
More Load Balancer Notes
App LBs provide a static DNS name
Network Load Balancers expose a public static IP (and work with TCP)
Use STICKINESS if Session Data is important (client gets same instance via cookie)
0.0.0.0/0 means allow anyone from anywhere
Can also use a (LB) security group, so that traffic has to come from load balancer
Auto Scaling
Cooldown = period between each scale action
ASG = Auto Scaling Group
Default Termination Policy = the AZ with the most instances, then the oldest launch configuration
ASGs are free and will also auto-restart instances to KEEP X number of instances running!
Scale based on CloudWatch alarms (Custom Metrics with the PutMetricData API)
Automatically Register new instances to a load balancer
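For reference, a custom metric can be published with the PutMetricData API; this AWS CLI sketch uses a made-up namespace and metric name:

```shell
# Publish a custom metric for a CloudWatch alarm (and the ASG) to scale on.
# The namespace and metric name are illustrative, not AWS defaults.
aws cloudwatch put-metric-data \
    --namespace "MatrixTools/App" \
    --metric-name ActiveSessions \
    --value 42 \
    --unit Count
```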
SNI = lets a client specify the SSL hostname it wants to reach (so one load balancer can serve multiple certificates)
ACM = AWS Certificate Manager (X.509 SSL/TLS server certificate)
SEO Case Study: Advanced SEO Boosting to the Top Spot!
Nashville Ink Tattoos
Difficulty Level: Very Hard
Easy Keyword (downtown nashville tattoo)
Effect in Ranking After 1 Month: 4 => N/A
Medium Keyword (nashville tattoo)
Effect in Ranking After 1 Month: 6 => N/A
Difficult Keyword (nashville tattoo shop)
Effect in Ranking After 1 Month: 11 => N/A
1.2k Monthly Visitors
Hello Everyone,
I'm super excited for the opportunity to work with such a well-known shop as Nashville Ink!
These posts will track the progress we make with this client. Given they already have a strong position on Google (and 1.2k visitors a month), we will aim to improve upon their numbers!
Wish me luck!
-Jesse Booth
Coming Soon!
Nashville Ink has been featured on Ink Master and has also been visited by Gale!
(grab images and pictures!)
Getting to work with Nashville Ink is a huge honor for us!
Play Sega Online for Free! Any & All Sega Games! You can even play 2-Player Mode games with a friend across the world!
First, download the emulator from this link:
Sega Online
To play 1-Player, you can skip to Section C.
Section A: Hamachi Setup
Unless you've already done this, download the zip package from the link above and extract the files.
Run the InstallHamachi.msi installer.
Sign up for a free LogMeIn Hamachi account. (There should be instructions for this at the end of said installer.)
Sign in and join an existing network. (The Network ID is: KupoKupoKupo) (The Password is not required, but it is: kupo)
If you have Hamachi setup correctly you should see something like this (Make sure there are no errors or warnings on KupoKupoKupo):
Section B: Network Setup
Test that both you and another player are correctly inside of the KupoKupoKupo Network. (You should be able to Chat and Ping, there is also a built in Diagnostics.)
If you're still having problems here, try disabling your computer's Firewall.
Section C: Fusion Sega Setup
Unless you've already done this, download the zip package from the link above and extract the files.
Go into the Sega folder and run Fusion.exe.
Players must load a game file from File => Load *Game*
That should bring up a search box to find the chosen game, there is a Games folder inside of the Sega folder.
For 2-Player: File - Netplay - Join Netplay Game - Join on the name of the computer shown on Hamachi (Section A, in my example the computer is R2D2).
Section D: More Games
Click Here after you have it working to download any & all additional Sega games.
Save any downloaded games from the above website into your same Sega => Games folder and play them via your Fusion.exe.
COMING SOON! Hit the Heart Button if you want me to finish this blog next!
how to set up debug messages in dotnet core
production error page etc
usually the missing .well-known folder doesn't get deployed?, link to the other blog
how can I fix this?
add stdout in the web.config
With all of those integrations you should be able to get feedback on every step into a Slack channel.
You will also notice that the GitHub -> Azure Pipelines integration will add an azure-pipelines.yml to the root folder of the project.
This is the file that I will use to wire up Monkey Test It.
Log into your Monkey Test It dashboard.
Get your API key and an example of how to use it.
Log into Azure.
Here you can use the Pipeline Editor to update your yml file.
You can use this or a Post Deployment Script to kick off the Monkey Test It API.
There are now also settings on the Website Deployment page in Visual Studio to configure a Continuous Delivery pipeline!
I wanted to share a simple and free method to add SSL to your website, so that it runs under HTTPS (HTTP Secure).
This is done for security reasons and will also give your site a boost on search engines.
First, you will go to SSL for Free.
This is a great site to create and manage your SSL certificates.
To verify your site, you can go to Manual Verification and download a text file to a .well-known/acme-challenge folder on your site.
After clicking the new file to verify the site, it will allow you to download a trusted SSL certificate for free and you will also get a private.key.
If you're having problems hitting the verification file, you might have to adjust the permissions on the .well-known folder or files.
If you're using .NET Core and still having problems, try this code in Startup.cs in the void Configure function:
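As a sketch of the usual approach (assuming the .well-known folder sits in the content root), UseStaticFiles can be told to serve the extensionless acme-challenge files:

```csharp
// Sketch: serve the extensionless verification files under /.well-known.
// Requires: using Microsoft.Extensions.FileProviders; using System.IO;
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(
        Path.Combine(Directory.GetCurrentDirectory(), ".well-known")),
    RequestPath = "/.well-known",
    ServeUnknownFileTypes = true // acme-challenge files have no extension
});
```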
Finally, with .NET Core deployments, I have found that there can be an issue deploying the .well-known directory,
so I found some additional code for the .csproj file to help with those issues:
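A typical fix (a sketch; adjust the path to match your layout) is an ItemGroup that forces the folder into the publish output:

```xml
<ItemGroup>
  <!-- Force the .well-known folder (and its extensionless files) into the publish output -->
  <Content Include=".well-known\**" CopyToPublishDirectory="PreserveNewest" />
</ItemGroup>
```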
If you're running on IIS, there is one additional step in order to convert these files to a .pfx file.
For this I recommend using this SSL converter site.
Now you will have a .pfx file with a private key. On Windows, you can just double click the file to add it to your local machine's certificate store.
Now in IIS, when you bind your site to port 443, your new certificate will appear in the SSL certificate drop down list!
You can read more about my IIS bindings strategy here.
I'd like to look into a way to automate renewing of certificates!
What sites and tools do you use?
This article will walk through making your first coding changes to a GitHub project.
As a prerequisite you must be invited to the Matrix Tools, LLC organization on GitHub.com.
You will sign up for GitHub using your @matrix.tools email address.
After you create a user, you must ask for permissions to be a Collaborator to the GitHub project.
Git is an incredibly popular technology for keeping up with code projects or "repositories" and is a great skill to have on a resume.
GitHub is the most popular public host for Git repositories, and it was recently acquired by Microsoft.
There are many ways to access a Git repository. I prefer using the command line (I even have a
custom Mario Shell command line),
but for brand new developers I would recommend downloading Git (from here)
as well as downloading Visual Studio Code.
Visual Studio Code is a lightweight but powerful source code editor which runs on your desktop, and it also automatically integrates with Git.
After downloading Visual Studio Code,
run the following commands (with your user information) from your command prompt to authenticate Git on your machine:
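These are the standard Git identity commands (substitute your own name and email; the values below are placeholders):

```shell
# Tell Git who you are; these values are attached to every commit you make
git config --global user.name "Your Name"
git config --global user.email "yourname@matrix.tools"
```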
Now open up Visual Studio Code and press Shift + Ctrl + P to open up the Command Palette.
Here you will run the Git Clone command.
And then paste in the GitHub web URL.
It will then ask you to select a folder on your computer to add the code files, I usually use something like
C:\Dev
It should then prompt you to open the project:
Now we have everything we need to be able to code and commit changes to the project!!
Use the Visual Studio Code Explorer to go to the /website/index.html file to find your first small assignment.
Notice after you make changes to the index.html file, you should see a (1) over the Source Control icon, on the far left side of Visual Studio Code.
This indicates that you have modified 1 file. Now click on the Source Control icon.
Then you will click on the "Commit" checkmark button in order to update the file to GitHub.
It will then ask you to commit the files and add a commit message.
After that, there is just one more button to press to do the "Push" to GitHub.
On the very bottom left of Visual Studio Code you should now see a 1 next to an Up Arrow.
It will then prompt you for your GitHub username and password, and then you're done!
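For reference, the same stage-and-commit flow can be done from the command line. Here is a self-contained sketch in a throwaway folder (the push step is omitted since it needs the GitHub remote and your credentials):

```shell
mkdir -p /tmp/first-commit-demo && cd /tmp/first-commit-demo
git init -q                                  # a throwaway local repository
git config user.name "New Developer"         # identity for this repo only
git config user.email "dev@matrix.tools"
echo "<h1>My first change</h1>" > index.html
git add index.html                           # stage the file (the "(1)" badge)
git commit -q -m "My first change"           # the "Commit" checkmark button
git log --oneline                            # verify the commit exists
```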
If you see the error: "Permission to MatrixToolsLLC/new-developers.git denied" you will need to remind me to add you as a Collaborator to the GitHub project.
You should now verify / test that everything worked as expected.
The "1 next to an Up Arrow" should now have disappeared from the bottom left of Visual Studio Code.
Go to the index.html file on GitHub here, and verify your changes were pushed.
Open index.html in a browser on your machine and verify it looks correct.
On my machine this is: C:\Dev\new-developers\website\index.html
Now you're ready to roll and start contributing to coding projects!!
Going forward you should be able to make changes, commit, and push all from within Visual Studio Code.
Let me know when you've completed the assignment or if you have any issues!
This is an easy tutorial explaining how to create a custom email address or alias.
First, you will need an existing Gmail account and you will need a method of forwarding emails to that address.
For example, I use a great site called Namesilo.com to register domains.
It has an Email Forwarding feature that allows me to choose any email for my domain and forward it to any Gmail account.
(In other words, the redyoshi.com account can forward any email *@redyoshi.com to my current Gmail inbox, or any other existing inbox.)
After your domain manager sets this up you should run a test by sending an email to your new address and verify that it shows up in your inbox.
Now you can receive emails from that address, but that's only half of the battle, we still need to be able to send emails from that address.
First, you'll need to create and copy/save an "App password" from this link.
Then you'll go to your Gmail settings - Accounts and Import - Send mail as - Add another email address.
For the SMTP server use: smtp.gmail.com
The Username is your Gmail email address, but instead of your Gmail password, use the new "App password" that you saved.
If you're still having problems logging into your Gmail account on this step, you probably need to make sure 2-Step Verification is activated.
And you're done! Now you should be able to send and receive emails from your new address.
Just for a reference, I got this information from this support answer.
I will also show the answer below, however, the support answer fails to mention the very important trick about the "App password".
You can also reach out to Matrix Tools and they can do all of this for you, or if you have any questions just contact them at
contact@matrix.tools
Finally, by going to Gmail's settings, I recommend adding a custom signature for any new alias.
At Matrix Tools, we use the following template to ensure a consistent signature:
https://www.matrix.tools
Check them out, they are great!
Competitive Advantages. Mission statement, core values
Tracking statistics on page hits and google tracking / google webmasters SEO + SSL + SSM packages available.
//jeb
1) START A SMALL BUSINESS!!!!!!!
GET YOUR LLC!
Our LLC cost $344
Matrix Tools used Swyft Filings.
The following is my strategy for Website IIS bindings.
There are basically 4 different addresses that all point to your website.
You have an http and an https, as well as a www. and a non-www. version.
You should be using https for security and SEO reasons (you can read more about how to setup SSL for free here), and then you will have to pick whether you prefer the www. or non-www. version
as your main address. I use "https://www." for all my sites, which means that the other 3 addresses should all point to that one.
This is accomplished via IIS (Internet Information Services) bindings.
IIS bindings are used to bind a URL address to a website folder that exists on the machine.
This is how the machine knows what files to serve to the user when they visit your URL.
Below you will see IIS Site Bindings that ensure all traffic to "https://www.redyoshi.com" is served the correct files for the RED Yoshi site.
Check "Require Server Name Indication" if you have multiple sites using SSL on your server.
The "Basic Settings" dialog box controls the location of those files.
Now we need to setup another IIS binding for the other 3 addresses and have it redirect to our https version.
Example:
Right click on Sites in IIS and click Add Website...
I will call this one "RedYoshiHttp301", and instead of pointing to the actual files, I will point to any other location. This location will be used to store the configuration.
(I like to create a 301 folder in the website for this.)
Now that RedYoshiHttp301 exists we will setup 301 Redirects for it, in order to redirect to the https version.
Click on the "HTTP Redirect" feature in IIS and setup a 301 like this:
Now click on the "URL Rewrite" for this new site and add 3 rules to improve SEO.
The 3 rules keep everything consistent: one https address, lowercase, and no trailing / in the URL.
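As a sketch, the three rules live in the site's web.config under system.webServer; the rule names are mine and the host is the redyoshi.com example from above, so test this before deploying:

```xml
<rewrite>
  <rules>
    <!-- 1. Force the https://www. version of the site -->
    <rule name="RedirectToHttpsWww" stopProcessing="true">
      <match url="(.*)" />
      <conditions logicalGrouping="MatchAny">
        <add input="{HTTPS}" pattern="off" />
        <add input="{HTTP_HOST}" pattern="^redyoshi\.com$" />
      </conditions>
      <action type="Redirect" url="https://www.redyoshi.com/{R:1}" redirectType="Permanent" />
    </rule>
    <!-- 2. Lowercase the URL -->
    <rule name="LowercaseUrl" stopProcessing="true">
      <match url="[A-Z]" ignoreCase="false" />
      <action type="Redirect" url="{ToLower:{URL}}" redirectType="Permanent" />
    </rule>
    <!-- 3. Strip the trailing slash (skipping real files and directories) -->
    <rule name="RemoveTrailingSlash" stopProcessing="true">
      <match url="(.*)/$" />
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
      </conditions>
      <action type="Redirect" url="{R:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>
```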
Hello Everyone,
In hopes to master/improve my processes, I plan on real-time documenting the exact steps it takes to reach the #1 Ranking on Google!
Wish me luck!
-Jesse Booth
I hope this blog will serve as a way to share some of the BEST/FREE Business and Digital Marketing
tools and strategies, most of which you've probably never even heard of!
What topics are you interested in talking about?
What are you currently working on?
Thanks for stopping by!
Check out my portfolio site:
Jesse Booth Info