5BB NADEX 5 Minute Binary Options Expiration System ...
The best russian binary option signal service
What is the quickest solution to finding a 4 digit number asking only yes/no questions?
A friend and I were watching a Korean game show called "The Genius", and in it they had a particularly brilliant competitive maths game. The premise was fairly simple - Each contestant had to pick a four digit number. They then were allowed to ask questions to each other one after the other, or use a turn to guess what the opponent's number was. The only additional rule was that 0 was treated as even for the purposes of questioning. After watching this, my friend and I tried to come up with a solution to guarantee finding your opponent's number in the fewest possible questions, but it very quickly got extremely complex. However, we're both fairly sure that there's a clever mathematical answer to guarantee it in a low amount of questions. After working out the number you need to use a turn to guess, so once the number is worked out a +1 needs to be added. (This isn't important if you know the number, but I figure can be relevant if you can get down to say 3 potential options, since just guessing all 3 is as efficient as working out which one it is) The obvious first approach we tried was binary searching the numbers for each digit. With this method, each digit could be found in a maximum of 4 questions (10 to 5 to 3 to 2 to 1), so we knew that we needed to try and beat 16 questions. We then realised that if we treated the first two digits and the second two digits as two digit numbers, it would only take a maximum of 7 questions to find each digit pair (100 to 50 to 25 to 13 to 7 to 4 to 2 to 1), so we were down to 14 questions. Following the theme, we tested binary search on all 4 digits, but realised it would take 14 questions (10000 to 5000 to 2500 to 1250 to 625 to 313 to 157 to 79 to 40 to 20 to 10 to 5 to 3 to 2 to 1), resulting in 14. It was no more efficient than the two sets of digits, and was also harder to calculate. I then tried a set of 3 digits and a single, (1000 to 500 to 250 to 125 to 63 to 32 to 16 to 8 to 4 to 2 to 1) + 4 for the remaining digit, and again this was 14. I then proposed a different solution - Could we potentially get more information by adding the digits together? I tried it on a number he had picked, asking questions to do a binary search on the sum of pairs of digits. Assigning the letters abcd to the four digits, I worked out a+b, c+d, b+c, and a+d. I figured doing this would allow me to arrange the numbers correctly once I had crunched it out. Since the number was a 4 digit number, I knew the maximum the total for all four numbers could reach was 36. That meant each pair was a maximum of 18, and sets of pairs had to add up to no more than 36. I started off by binary searching the sum of the first two digits, which would take a maximum of 5 turns (18 to 9 to 5 to 3 to 2 to 1), and repeated for the second two. That would take 10 turns, but give me the sum of all digits, the sum of the first two, and the sum of the second two. At this point I was adamant that I could potentially figure out the number using this information alone, but I was unsuccessful. I was able to use logic to narrow down the possible values for the outer two and inner two numbers by figuring out the total of all four digits, figuring out the number pair combinations a/b and c/d could be to satisfy that, and then working out the potential pairs of values that the inner two and outer two numbers could add up to. In our example test, a+b was 5, c+d was 14. From this I knew the total was 19. The first pair of numbers had to be 0 and 5, 1 and 4, or 2 and 3. 
The second pair had to be 9 and 5, 8 and 6, or 7 and 7. Using the logic of adding the highest number of one pair to the highest number of the other pair and then cycling through the values, I worked out that the outer two numbers had to add up to 14, 13, 12, or 11, and the inner two numbers had to add up to 8, 7, 6, or 5. Binary searching these could be done in two searches each, bringing the total to... 14. :( This is where it turned a bit weird though - After doing some logic on the resulting numbers (the outer pair was 12 and the inner pair was 7), I came up with three potential answers that satisfied every single constraint. 5077 4168 3259 These three numbers are amazing. The first two digits add up to 5, the third and fourth add up to 14, the first and fourth add up to 12, and the second and third add up to 7. Unfortunately from here there was no choice but to guess all three, no amount of questioning could lower it from three to two questions. And so our final total was 17, no more efficient than just binary searching the numbers in the first place. And so, I ask you this - Is there a more efficient, human doable way to discover the four digit number than binary searching the first pair and second pair of digits? I feel like there has to be, but I'm not knowledgeable enough to know!
Once a year, this subreddit hosts a survey in order to get to know the community a little bit and in order to answer questions that are frequently asked here. Earlier this summer, several thousand of you participated in the 2020 Subreddit Demographic Survey. Only those participants who meet our wiki definition of being childfree's results were recorded and analysed. Of these people, multiple areas of your life were reviewed. They are separated as follows:
Child Status
General Demographics
Education Level
Career and Finances
Location
Religion and Spirituality
Sexual and Romantic Life
Childhood and Family Life
Sterilisation
Childfreedom
State of the Subreddit
2. Methodology
Our sample is redditors who saw that we had a survey currently active and were willing to complete the survey. A stickied post was used to advertise the survey to members.
3. Results
The raw data may be found via this link. 7305 people participated in the survey from July 2020 to October 2020. People who did not meet our wiki definition of being childfree were excluded from the survey. The results of 5134 responders, or 70.29% of those surveyed, were collated and analysed below. Percentages are derived from the respondents per question.
General Demographics
Age group
Age group
Participants
Percentage
18 or younger
309
6.02%
19 to 24
1388
27.05%
25 to 29
1435
27.96%
30 to 34
1089
21.22%
35 to 39
502
9.78%
40 to 44
223
4.35%
45 to 49
81
1.58%
50 to 54
58
1.13%
55 to 59
25
0.49%
60 to 64
13
0.25%
65 to 69
7
0.14%
70 to 74
2
0.04%
82.25% of the sub is under the age of 35.
Gender and Gender Identity
Age group
Participants #
Percentage
Agender
62
1.21%
Female
3747
73.04%
Male
1148
22.38%
Non-binary
173
3.37%
Sexual Orientation
Sexual Orientation
Participants #
Percentage
Asexual
379
7.39%
Bisexual
1177
22.93%
Heterosexual
2833
55.20%
Homosexual
264
5.14%
It's fluid
152
2.96%
Other
85
1.66%
Pansexual
242
4.72%
Birth Location
Because the list contains over 120 countries, we'll show the top 20 countries:
Country of birth
Participants #
Percentage
United States
2775
57.47%
United Kingdom
367
7.60%
Canada
346
7.17%
Australia
173
3.58%
Germany
105
2.17%
Netherlands
67
1.39%
India
63
1.30%
Poland
57
1.18%
France
47
0.97%
New Zealand
42
0.87%
Mexico
40
0.83%
Brazil
40
0.83%
Sweden
38
0.79%
Finland
31
0.64%
South Africa
30
0.62%
Denmark
28
0.58%
China
27
0.56%
Ireland
27
0.56%
Phillipines
24
0.50%
Russia
23
0.48%
90.08% of the participants were born in these countries. These participants would describe their current city, town or neighborhood as:
The top 10 industries our participants are working in are:
Industry
Participants #
Percentage
Information Technology
317
6.68%
Health Care
311
6.56%
Education - Teaching
209
4.41%
Engineering
203
4.28%
Retail
182
3.84%
Government
172
3.63%
Admin & Clerical
154
3.25%
Restaurant - Food Service
148
3.12%
Customer Service
129
2.72%
Design
127
2.68%
Note that "other", "I'm a student", "currently unemployed" and "I'm out of the work force for health or other reasons" have been disregarded for this part of the evaluation. Out of the 3729 participants active in the workforce, the majority (1824 or 48.91%) work between 40-50 hours per week with 997 or 26.74% working 30-40 hours weekly. 6.62% work 50 hours or more per week, and 17.73% less than 30 hours. 513 or 10.13% are engaged in managerial responsibilities (ranging from Jr. to Sr. Management). On a scale of 1 (lowest) to 10 (highest), the overwhelming majority (3340 or 70%) indicated that career plays a very important role in their lives, attributing a score of 7 and higher. 1065 participants decided not to disclose their income brackets. The remaining 4,849 are distributed as follows:
Income
Participants #
Percentage
$0 to $14,999
851
21.37%
$15,000 to $29,999
644
16.17%
$30,000 to $59,999
1331
33.42%
$60,000 to $89,999
673
16.90%
$90,000 to $119,999
253
6.35%
$120,000 to $149,999
114
2.86%
$150,000 to $179,999
51
1.28%
$180,000 to $209,999
25
0.63%
$210,000 to $239,999
9
0.23%
$240,000 to $269,999
10
0.25%
$270,000 to $299,999
7
0.18%
$300,000 or more
15
0.38%
87.85% earn under $90,000 USD a year. 65.82% of our childfree participants do not have a concrete retirement plan (savings, living will).
Religion and Spirituality
Faith Originally Raised In
There were more than 50 options of faith, so we aimed to show the top 10 most chosen beliefs.
Faith
Participants #
Percentage
Catholicism
1573
30.76%
None (≠ Atheism. Literally, no notion of spirituality or religion in the upbringing)
958
18.73%
Protestantism
920
17.99%
Other
431
8.43%
Atheism
318
6.22%
Agnosticism
254
4.97%
Anglicanism
186
3.64%
Judaism
77
1.51%
Hinduism
75
1.47%
Islam
71
1.39%
This top 10 amounts to 95.01% of the total participants.
Current Faith
There were more than 50 options of faith, so we aimed to show the top 10 most chosen beliefs:
Faith
Participants #
Percentage
Atheism
1849
36.23%
None (≠ Atheism. Literally, no notion of spirituality or religion currently)
1344
26.33%
Agnosticism
789
15.46%
Other
204
4.00%
Protestantism
159
3.12%
Paganism
131
2.57%
Spiritualism
101
1.98%
Catholicism
96
1.88%
Satanism
92
1.80%
Wicca
66
1.29%
This top 10 amounts to 94.65% of the participants.
Level of Current Religious Practice
Level
Participants #
Percentage
Wholly seculanon religious
3733
73.73%
Identify with religion, but don't practice strictly
557
11.00%
Lapsed/not serious/in name only
393
7.76%
Observant at home only
199
3.93%
Observant at home. Church/Temple/Mosque/etc. attendance
125
2.47%
Strictly observant, Church/Temple/Mosque/etc. attendance, religious practice/prayeworship impacting daily life
Single and dating around, but not looking for anything serious
213
4.15%
Single and dating around, looking for something serious
365
7.12%
Single and not looking
1324
25.81%
Widowed
5
0.10%
Childfree Partner
Is your partner childfree? If your partner wants children and/or has children of their own and/or are unsure about their position, please consider them "not childfree" for this question.
Partner
Participants #
Percentage
I don't have a partner
1922
37.56%
I have more than one partner and none are childfree
3
0.06%
I have more than one partner and some are childfree
35
0.68%
I have more than one partner and they are all childfree
50
0.98
No
474
9.26%
Yes
2633
51.46%
Dating a Single Parent
Would the childfree participants be willing to date a single parent?
Answer
Participants #
Percentage
No, I'm not interested in single parents and their ties to parenting life
4610
90.13%
Yes, but only if it's a short term arrangement of some sort
162
3.17%
Yes, whether for long term or short term, but with some conditions (must not have child custody, no kid talk, etc.), as long as I like them and long as we're compatible
199
3.89%
Yes, whether for long term or short term, with no conditions, as long as I like them and as long as we are compatible
144
2.82%
Childhood and Family Life
On a scale from 1 (very unhappy) to 10 (very happy), how would you rate your childhood? Figure 3 Of the 5125 childfree people who responded to the question, 67.06% have a pet or are heavily involved in the care of someone else's pet.
Sterilisation
Sterilisation Status
Sterilisation Status
Participants #
Percentage
No, I am not sterilised and, for medical, practical or other reasons, I do not need to be
869
16.96%
No. However, I've been approved for the procedure and I'm waiting for the date to arrive
86
1.68%
No. I am not sterilised and don't want to be
634
12.37%
No. I want to be sterilised but I have started looking for a doctorequested the procedure
594
11.59%
No. I want to be sterilised but I haven't started looking for a doctorequested the procedure yet
2317
45.21%
Yes. I am sterilised
625
12.20%
Age when starting doctor shopping or addressing issue with doctor. Percentages exclude those who do not want to be sterilised and who have not discussed sterilisation with their doctor.
Age group
Participants #
Percentage
18 or younger
207
12.62%
19 to 24
588
35.85%
25 to 29
510
31.10%
30 to 34
242
14.76%
35 to 39
77
4.70%
40 to 44
9
0.55%
45 to 49
5
0.30%
50 to 54
1
0.06%
55 or older
1
0.06%
Age at the time of sterilisation. Percentages exclude those who have not and do not want to be sterilised.
Age group
Participants #
Percentage
18 or younger
5
0.79%
19 to 24
123
19.34%
25 to 29
241
37.89%
30 to 34
168
26.42%
35 to 39
74
11.64%
40 to 44
19
2.99%
45 to 49
1
0.16%
50 to 54
2
0.31%
55 or older
3
0.47%
Elapsed time between requesting procedure and undergoing procedure. Percentages exclude those who have not and do not want to be sterilised.
Time
Participants #
Percentage
Less than 3 months
330
50.46%
Between 3 and 6 months
111
16.97%
Between 6 and 9 months
33
5.05%
Between 9 and 12 months
20
3.06%
Between 12 and 18 months
22
3.36%
Between 18 and 24 months
15
2.29%
Between 24 and 30 months
6
0.92%
Between 30 and 36 months
2
0.31%
Between 3 and 5 years
40
6.12%
Between 5 and 7 years
25
3.82%
More than 7 years
50
7.65%
How many doctors refused at first, before finding one who would accept?
Doctor #
Participants #
Percentage
None. The first doctor I asked said yes
604
71.73%
One. The second doctor I asked said yes
93
11.05%
Two. The third doctor I asked said yes
54
6.41%
Three. The fourth doctor I asked said yes
29
3.44%
Four. The fifth doctor I asked said yes
12
1.43%
Five. The sixth doctor I asked said yes
8
0.95%
Six. The seventh doctor I asked said yes
10
1.19%
Seven. The eighth doctor I asked said yes
4
0.48%
Eight. The ninth doctor I asked said yes
2
0.24%
I asked more than 10 doctors before finding one who said yes
26
3.09%
Childfreedom
Primary Reason to Not Have Children
Reason
Participants #
Percentage
Aversion towards children ("I don't like children")
1455
28.36%
Childhood trauma
135
2.63%
Current state of the world
110
2.14%
Environmental (including overpopulation)
158
3.08%
Eugenics ("I have 'bad genes'")
57
1.11%
Financial
175
3.41%
I already raised somebody else who isn't my child
83
1.62%
Lack of interest towards parenthood ("I don't want to raise children")
2293
44.69%
Maybe interested for parenthood, but not suited for parenthood
48
0.94%
Medical ("I have a condition that makes conceiving/bearing/birthing children difficult, dangerous or lethal")
65
1.27%
Other
68
1.33%
Philosophical / Moral (e.g. antinatalism)
193
3.76%
Tokophobia (aversion/fear of pregnancy and/or chidlbirth)
291
5.67%
95.50% of childfree people are pro-choice, however only 55.93% of childfree people support financial abortion.
I'm a student and my future job/career will heavily makes me interact with children on a daily basis
67
1.30%
I'm retired, but I used to have a job that heavily makes me interact with children on a daily basis
6
0.12%
I'm unemployed, but I used to have a job that heavily makes me interact with children on a daily basis
112
2.19%
No, I do not have a job that makes me heavily interact with children on a daily basis
4493
87.81%
Other
148
2.89%
Yes, I do have a job that heavily makes me interact with children on a daily basis
291
5.69%
4. Discussion
Child Status
This section solely existed to sift the childfree from the fencesitters and the non childfree in order to get answers only from the childfree. Childfree, as it is defined in the subreddit, is "I do not have children nor want to have them in any capacity (biological, adopted, fostered, step- or other) at any point in the future." 70.29% of participants actually identify as childfree, slightly up from the 2019 survey, where 68.5% of participants identified as childfree. This is suprising in reflection of the overall reputation of the subreddit across reddit, where the subreddit is often described as an "echo chamber".
General Demographics
The demographics remain largely consistent with the 2019 survey. However, the 2019 survey collected demographic responses from all participants in the survey, removing those who did not identify as childfree when querying subreddit specific questions, while the 2020 survey only collected responses from people who identified as childfree. This must be considered when comparing results. 82.25% of the participants are under 35, compared with 85% of the subreddit in the 2019 survey. A slight downward trend is noted compared over the last two years suggesting the userbase may be getting older on average. 73.04% of the subreddit identify as female, compared with 71.54% in the 2019 survey. Again, when compared with the 2019 survey, this suggests a slight increase in the number of members who identify as female. This is in contrast to the overall membership of Reddit, estimated at 74% male according to Reddit's Wikipedia page [https://en.wikipedia.org/wiki/Reddit#Users_and_moderators]. The ratio of members who identify as heterosexual remained consistent, from 54.89% in the 2019 survey to 55.20% in the 2020 survey. Ethnicity wise, 77% of members identified as primarily Caucasian, consistent with the 2019 results. While the ethnicities noted to be missing in the 2019 survey have been included in the 2020 survey, some users noted the difficulty of responding when fitting multiple ethnicities, and this will be addressed in the 2021 survey.
Education level
As it did in the 2019 survey, this section highlights the stereotype of childfree people as being well educated. 2.64% of participants did not complete high school, which is a slight decrease from the 2019 survey, where 4% of participants did not graduate high school. However, 6.02% of participants are under 18, compared with 8.22% in the 2019 survey. 55% of participants have a bachelors degree or higher, while an additional 23% have completed "some college or university". At the 2020 survey, the highest percentage of responses under the: What is your degree/major? question fell under "I don't have a degree or a major" (20.12%). Arts and Humanities, and Computer Science have overtaken Health Sciences and Engineering as the two most popular majors. However, the list of majors was pared down to general fields of study rather than highly specific degree majors to account for the significant diversity in majors studied by the childfree community, which may account for the different results.
Career and Finances
The highest percentage of participants at 21.61% listed themselves as trained professionals. One of the stereotypes of the childfree is of wealth. However this is not demonstrated in the survey results. 70.95% of participants earn under $60,000 USD per annum, while 87.85% earn under $90,000 per annum. 21.37% are earning under $15,000 per annum. 1065 participants, or 21.10% chose not to disclose this information. It is possible that this may have skewed the results if a significant proportion of these people were our high income earners, but impossible to explore. A majority of our participants work between 30 and 50 hours per week (75.65%) which is slightly increased from the 2019 survey, where 71.2% of participants worked between 30 and 50 hours per week.
Location
The location responses are largely similar to the 2019 survey with a majority of participants living in a suburban and urban area. 86.24% of participants in the 2020 survey live in urban and suburban regions, with 86.7% of participants living in urban and suburban regions in the 2019 survey. There is likely a multifactorial reason for this, encompassing the younger, educated skew of participants and the easier access to universities and employment, and the fact that a majority of the population worldwide localises to urban centres. There may be an element of increased progressive social viewpoints and identities in urban regions, however this would need to be explored further from a sociological perspective to draw any definitive conclusions. A majority of our participants (57.47%) were born in the USA. The United Kingdom (7.6%), Canada (7.17%), Australia (3.58%) and Germany (2.17%) encompass the next 4 most popular responses. This is largely consistent with the responses in the 2019 survey.
Religion and Spirituality
For the 2020 survey Christianity (the most popular result in 2019) was split into it's major denominations, Catholic, Protestant, Anglican, among others. This appears to be a linguistic/location difference that caused a lot of confusion among some participants. However, Catholicism at 30.76% remained the most popular choice for the religion participants were raised in. However, of our participant's current faith, Aetheism at 36.23% was the most popular choice. A majority of 78.02% listed their current religion as Aetheist, no religious or spiritual beliefs, or Agnostic. A majority of participants (61%) rated religion as "not at all influential" to the childfree choice. This is consistent with the 2019 survey where 62.8% rated religion as "not at all influential". Despite the high percentage of participants who identify as aetheist or agnostic, this does not appear to be related to or have an impact on the childfree choice.
Romantic and Sexual Life
60.19% of our participants are in a relationship at the time of the survey. This is consistent with the 2019 survey, where 60.7% of our participants were in a relationship. A notable proportion of our participants are listed as single and not looking (25.81%) which is consistent with the 2019 survey. Considering the frequent posts seeking dating advice as a childfree person, it is surprising that such a high proportion of the participants are not actively seeking out a relationship. Unsurprisingly 90.13% of our participants would not consider dating someone with children. 84% of participants with partners of some kind have at least one childfree partner. This is consistent with the often irreconcilable element of one party desiring children and the other wishing to abstain from having children.
Childhood and Family Life
Overall, the participants skew towards a happier childhood.
Sterilisation
While just under half of our participants wish to be sterilised, 45.21%, only 12.2% have been successful in achieving sterilisation. This is likely due to overarching resistance from the medical profession however other factors such as the logistical elements of surgery and the cost may also contribute. There is a slight increase from the percentage of participants sterilised in the 2019 survey (11.7%). 29.33% of participants do not wish to be or need to be sterilised suggesting a partial element of satisfaction from temporary birth control methods or non-necessity of contraception due to their current lifestyle practices. Participants who indicated that they do not wish to be sterilised or haven't achieved sterilisation were excluded from the percentages where necessary in this section. Of the participants who did achieve sterilisation, a majority began the search between 19 and 29, with the highest proportion being in the 19-24 age group (35.85%) This is a marked increase from the 2019 survey where 27.3% of people who started the search were between 19-24. This may be due to increased education about permanent contraception or possibly due to an increase in instability around world events. The majority of participants who sought out and were successful at achieving sterilisation, were however in the 25-29 age group (37.9%). This is consistent with the 2019 survey results. The time taken between seeking out sterilisation and achieving it continues to increase, with only 50.46% of participants achieving sterilisation in under 3 months. This is a decline from the number of participants who achieved sterilisation in 3 months in the 2019 survey (58.5%). A potential cause of this decrease is to Covid-19 shutdowns in the medical industry leading to an increase in procedure wait times. The proportion of participants who have had one or more doctors refuse to perform the procedure has stayed consistent between the two surveys.
Childfreedom
The main reasons for people choosing the childfree lifestyle are a lack of interest towards parenthood and an aversion towards children which is consistent with the 2019 survey. Of the people surveyed 67.06% are pet owners or involved in a pet's care, suggesting that this lack of interest towards parenthood does not necessarily mean a lack of interest in all forms of caretaking. The community skews towards a dislike of children overall which correlates well with the 87.81% of users choosing "no, I do not have, did not use to have and will not have a job that makes me heavily interact with children on a daily basis" in answer to, "do you have a job that heavily makes you interact with children on a daily basis?". This is an increase from the 2019 survey. A vast majority of the subreddit identifes as pro-choice (95.5%), a slight increase from the 2019 results. This is likely due to a high level of concern about bodily autonomy and forced birth/parenthood. However only 55.93% support financial abortion, aka for the non-pregnant person in a relationship to sever all financial and parental ties with a child. This is a marked decrease from the 2019 results, where 70% of participants supported financial abortion. Most of our users realised that did not want children young. 58.72% of participants knew they did not want children by the age of 18, with 95.37% of users realising this by age 30. This correlates well with the age distribution of participants. Despite this early realisation of our childfree stance, 80.59% of participants have been "bingoed" at some stage in their lives.
The Subreddit
Participants who identify as childfree were asked about their interaction with and preferences with regards to the subreddit at large. Participants who do not meet our definition of being childfree were excluded from these questions. By and large our participants were lurkers (72.32%). Our participants were divided on their favourite flairs with 38.92% selecting "I have no favourite". The next most favourite flair was "Rant", at 16.35%. Our participants were similarly divided on their least favourite flair, with 63.40% selecting "I have no least favourite". In light of these results the flairs on offer will remain as they have been through 2019. With regards to "lecturing" posts, this is defined as a post which seeks to re-educate the childfree on the practices, attitudes and values of the community, particularly with regards to attitudes towards parenting and children, whether at home or in the community. A commonly used descriptor is "tone policing". A small minority of the survey participants (3.36%) selected "yes" to allowing all lectures, however 33.54% responded "yes" to allowing polite, respectful lectures only. In addition, 45.10% of participants indicated that they were not sure if lectures should be allowed. Due to the ambiguity of responses, lectures will continue to be not allowed and removed. Many of our participants (36.87%) support the use of terms such as breeder, mombie/moo, daddict/duh on the subreddit, with a further 32.63% supporting use of these terms in context of bad parents only. This is a slight drop from the 2019 survey. In response to this use of the above and similar terms to describe parents remains permitted on this subreddit. However, we encourage users to keep the use of these terms to bad parents only. 44.33% of users support the use of terms to describe children such as crotchfruit on the subreddit, a drop from 55.3% last year. A further 25.80% of users supporting the use of this and similar terms in context of bad children only, an increase from 17.42% last year. In response to this use of the above and similar terms to describe children remains permitted on this subreddit. 69.17% of participants answered yes to allowing parents to post, provided they stay respectful. In response to this, parent posts will continue to be allowed on the subreddit. As for regret posts, which were to be revisited in this year's survey, only 9.5% of participants regarded them as their least favourite post. As such they will continue to stay allowed. 64% of participants support under 18's who are childfree participating in the subreddit with a further 19.59% allowing under 18's to post dependent on context. Therefore we will continue to allow under 18's that stay within the overall Reddit age requirement. There was divide among participants as to whether "newbie" questions should be removed. An even spread was noted among participants who selected remove and those who selected to leave them as is. We have therefore decided to leave them as is. 73.80% of users selected "yes, in their own post, with their own "Leisure" flair" to the question, "Should posts about pets, travel, jetskis, etc be allowed on the sub?" Therefore we will continue to allow these posts provided they are appropriately flaired.
5. Conclusion
Thank you to our participants who contributed to the survey. This has been an unusual and difficult year for many people. Stay safe, and stay childfree.
The pharmacy 2020 demographics survey results are here! There were 258 respondents this year. Please note that the numbers will not necessarily add up to 100%, since all questions were optional. Sorry in advance for the crappy Excel graphs. Location Most respondents hailed from the US (233; 90.3%), followed by Canada (10; 3.9%), United Kingdom (8; 3.1%), New Zealand (2; 0.8%), and 1 respondent each from Australia, Indonesia, Slovakia, Sweden, and Taiwan. Of the 233 Americans, the top 3 states were California (20; 8.6%), Pennsylvania (18; 7.7%), and Texas (18; 7.7%). The 10 Canadians were from Ontario (5; 50%), British Columbia (2; 20%), Alberta (1; 10%), Nova Scotia (1; 10%), and Quebec (1; 10%). Demographics Of the 258 respondents, 130 (50.4%) identified as female, 123 (47.7%) as male, and 3 (1.2%) as non-binary. Age distribution is shown in the below table. A few statistics: minimum 19, maximum 68, mean 29.0, median 28, mode 26. https://preview.redd.it/qxyxs2sj09c51.png?width=554&format=png&auto=webp&s=202bef88a53fa8596182435590ba9de8eb3646c9 In terms of race/ethnicity, the categories from most to least common were as follows: white (156; 60.5%), Asian (55; 21.3%), 2 or more races (11; 4.3%), black (9; 3.5%), Hispanic or Latino (8; 3.1%), Indian subcontinent (6; 2.3%), Arab (4; 1.6%), Native American or American Indian (2; 0.8%), and Armenian (1; 0.4%). General employment questions Of the 258 respondents, 169 (65.5%) were pharmacists, 55 (21.3%) were pharmacy students, 22 (8.5%) were non-pharmacist staff, and 8 (3.1%) were pre-pharmacy students. There were also 1 each of the following: corporate pharmacy compliance, pharmacy wholesaler, pharmacology student, and other healthcare professional. Most respondents (169; 65.5%) were employed full time (defined as > 30 hours/week), while 19 (7.4%) were employed part time. 49 respondents (19.0%) were full time students (not necessarily in pharmacy), 13 (5.0%) were unemployed, 4 (1.6%) worked outside of the field of pharmacy, 2 (0.8%) were self-employed, 1 (0.4%) was retired, and 1 (0.4%) was consulting/contracting. There was a nearly equal split between respondents working in suburban (99; 38.4%) vs. urban (97; 37.6%) locations, followed by 21 (8.1%) in rural locations and 15 (5.8%) working remotely (apologies - I should have made this question/response more clear, but based on a jump compared to last year's survey, I think people working from home temporarily due to COVID-19 may have chosen this option). A pie chart of primary place of employment is shown below, with the top 7 responses shown in the legend: community/retail (136; 52.7%), hospital including outpatient (48; 18.6%), pharmaceutical industry including CROs (11; 4.3%), mail ordespecialty/home infusion (9; 3.5%), unemployed (8; 3.1%), long-term care/hospice (8; 3.1%), and ambulatory care (5; 1.9%). Please note that the unemployed category includes non-working full time students. https://preview.redd.it/csyipt0hs9c51.png?width=297&format=png&auto=webp&s=3b91337feb634a61730ccfbdd09aa8a0fdda6d7a A small proportion (42; 16.3%) of respondents reported having a second job. Of these, the most common fields of employment were: hospital including outpatient (10; 23.8%), community/retail (8; 19.0%), and self employment/side hustle (7; 16.7%). Salary For the following charts, I only included those working full time. Below is a histogram for full time pharmacist salary worldwide, as well as a table showing some stats for global, US, and ex-US salaries. 
https://preview.redd.it/n16j31x1v9c51.png?width=447&format=png&auto=webp&s=624581f5b94c917c417ac39da92cf9eb4c77130c
Clinical Research & Development (including Clinical Operations)
1
Formulation
1
Marketing/Business Analytics
1
Medical Science Liaison
1
The breakdown by level was as follows: PharmD Fellow (3; 27.3%), Associate/Specialist (6; 54.5%), ManageSupervisor (1; 9.1%), Director (1; 9.1%). Five respondents had completed or were currently completing a fellowship. Four of these 5 provided their salaries during their fellowships, with an average of $50,000. Pharmacy and pre-pharmacy students There were 63 respondents (24.4%) who reported being pharmacy or pre-pharmacy students. Of these, the top 3 desired fields upon graduation were: hospital including residencies (16; 25.4%), undecided (13; 20.6%), and community/retail (11; 17.5%). These 63 students attended (or planned to attend) 45 different schools worldwide. The 5 most common schools reported were as follows: University of Toronto (3; 4.8%), Feik School of Pharmacy (2; 3.2%), Ohio State University (2; 3.2%), Temple University (2; 3.2%), and University of Colorado (2; 3.2%). The breakdown by year was as follows: undergraduate/pre-pharmacy (8; 12.7%), PY1 (4; 6.3%), PY2 (18; 28.6%), PY3 (16; 25.4%), and PY4 (13; 20.6%). Of the 13 PY4 students, 2 reported having a job lined up after graduation, both in community/retail. Most students (45; 71.4%) were working in a pharmacy setting while in school. Stats for the number of hours worked weekly were as follows: minimum 3; maximum 34; mean 15.8; median 15. The most common duties interns were authorized to perform at their jobs were counseling patients (38; 84.4%), administering immunizations (24; 53.3%), and product verification (17; 37.8%). Note that interns could choose more than 1 option. Of the 63 students, 36 (57.1%) reported that they would choose to attend pharmacy school again if they could go back in time, knowing what they know now. Sixteen students (25.4%) reported that they would decide on a different career path, and 5 (7.9%) were unsure. Following pharmacy school, some students were considering pursuing the following degrees (top 3 listed): MPH (6; 9.5%), MD (4; 6.3%), and MBA (3; 4.8%). Results from additional questions are shown in chart form below. https://preview.redd.it/mls7e2139ac51.png?width=480&format=png&auto=webp&s=5db3ec80fd6e1934c787941278b7b755ad802a45 https://preview.redd.it/p9p44ifm9ac51.png?width=480&format=png&auto=webp&s=faf04b54ed228cc0cf110d06ed27bfd524ba894f https://preview.redd.it/8p7qq205aac51.png?width=464&format=png&auto=webp&s=ae5d53c284cd86ff787498dad58c4d625ae2afb1 Pharmacists There were 169 pharmacists, from 91 different pharmacy schools. The most common alma maters were Rutgers University Ernest Mario School of Pharmacy (RU RAH RAH!!) with 6 respondents (3.6%), University of Pittsburgh with 5 respondents (3.0%), and the following 5 schools with 4 respondents each: Northeastern University, Ohio Northern University, University of Colorado, University of Georgia, and University of Kansas. Most pharmacists (152; 89.9%) were currently practicing pharmacy. Five (3.0%) had practiced in the past but were no longer practicing, and 10 (5.9%) had never practiced after graduating. Of those currently practicing pharmacy, the statistics on the number of years in practice were as follows: minimum 0.1; maximum 35; mean 4.8; and median 3. Nearly half of pharmacists (75; 49.3%) said they would choose a different career path if they could go back in time, knowing what they know now, while 71 pharmacists (46.7%) said they would still choose to pursue pharmacy. 
Local practice standards About half of pharmacists (84; 55.3%) reported administering (or being allowed to administer) many types of immunizations, while 3 (2.0%) reported that pharmacists were not allowed in their location. A further 63 pharmacists (41.4%) did not administer immunizations simply because it was not part of their job description (eg, hospital inpatient). Regarding therapeutic interchange for non-controlled prescriptions, 63 pharmacists (41.4%) reporting being authorized to update a prescription only after consulting the prescriber. An additional 43 pharmacists (28.3%) were allowed to update a prescription as long as the prescriber was notified afterwards (ie, without prior permission), and 8 pharmacists (5.3%) were allowed per institutional protocol or collaborative practice agreement. Twenty-four pharmacists (15.8%) reported that a new prescription would be required and that no updates by the pharmacist were allowed. For controlled prescriptions, 24 pharmacists (15.8%) reported being allowed to change any/all elements of the prescription following consultation with the prescriber, and 4 pharmacists (2.6%) were allowed per institutional protocol or collaborative practice agreement. Sixty-six pharmacists (43.4%) were allowed to change certain (but not all) elements, while 40 (26.3%) could not change any part of a controlled prescription and required the prescriber to issue a new one. Regarding pharmacist prescribing, most pharmacists (110; 72.4%) were not allowed to prescribe medications. Nineteen pharmacists (12.5%) could prescribe for certain health conditions, 3 (2.0%) could prescribe for any health condition, and 2 (1.3%) could prescribe per institutional protocol or collaborative practice agreement. Results from additional questions are shown in chart form below. https://preview.redd.it/9q4wjmmg3bc51.png?width=281&format=png&auto=webp&s=cf2ec43db13f3fcbe4cb398b1c39808389f54572 https://preview.redd.it/945u7beklac51.png?width=480&format=png&auto=webp&s=e74267ca8c2d56dd0c7fc42497df2f0d42f14a3a https://preview.redd.it/yyd7su4tlac51.png?width=480&format=png&auto=webp&s=86e12e31c5de3b91a615add5dd28055f881beddc https://preview.redd.it/tk2msh41mac51.png?width=480&format=png&auto=webp&s=c091747118370117d3ecf35a8e9bffd54ac02805 https://preview.redd.it/9njkd9vemac51.png?width=346&format=png&auto=webp&s=ffe54bfc9ae206295f7e81685a361357c14a625a https://preview.redd.it/mywjx5nwmac51.png?width=444&format=png&auto=webp&s=1eb695e764c2bf7c1ffbfddd947fc297eed4f8ea Pharmacy residents Of the 169 pharmacists, 31 (18.3%) had completed or were currently completing a pharmacy residency. Of those, there were 6 current PGY-1 residents and 1 current PGY-2 resident. Of the 24 pharmacists who had completed their PGY-1 residencies, most (18; 75%) did rotational programs without a specific focus. The remaining 6 pharmacists specialized in the following areas during their PGY-1: ambulatory care (2; 8.3%), community pharmacy (1; 4.2%), managed care (1; 4.2%), pediatrics (1; 4.2%), and pharmacotherapy (1; 4.2%). Stats on their PGY-1 salaries were as follows: minimum $33,000; maximum $60,000; mean $44,325; median $45,000. These PGY-1 residencies were done primarily in an urban setting (18; 75%), followed by suburban (3; 12.5%) and rural (2; 8.3%). 
Of the 11 pharmacists who had completed their PGY-2 residencies, the specialties included: ambulatory care (3; 27.3%), psychiatry (2; 18.2%), and 1 each of administration, critical care, emergency medicine, infectious disease, oncology, and pharmacotherapy (9.1% each). Stats on their PGY-2 salaries were as follows: minimum $35,000; maximum $51,000; mean $45,625; median $46,500. These PGY-2 residencies were done almost equally in urban (6; 54.5%) and suburban (5; 45.5%) settings. The 6 current PGY-1 residents had the following plans immediately following their PGY-1: inpatient staff pharmacist (2; 33.3%), PGY-2 residency (2; 33.3%), inpatient clinical specialty pharmacist (1; 16.7%), and non-practicing pharmacist (1; 16.7%). Of those who had completed their residencies, their roles immediately afterward are listed in the table below.
Role
Number of Respondents
Inpatient staff pharmacist
8
Inpatient clinical specialty pharmacist
6
Ambulatory care pharmacist
4
Unemployed
2
Outpatient pharmacist (eg, retail, mail order, long term care)
1
Stopped practicing but remained in the field of pharmacy (eg, industry)
1
Industry fellowship
1
Drug information pharmacist
1
Pharmacy organizations This question was directed toward American respondents. There were 96 respondents who reported being currently active members of an association, the most common of which were ASHP (39; 40.6%), APhA (38; 39.6%), and a local/state pharmacy association (29; 30.2%). There were 35 respondents who reported previously being members of an association, the most common of which were APhA (25; 71.4%), ASHP (15; 42.9%), and a local/state pharmacy association (13; 37.1%). Final comments Thanks again to everyone who took the survey, and especially those who provided feedback! I totally acknowledge that the survey is very US-centric, and for that I apologize. I did take some feedback from some people in this subreddit, but if anyone ex-US wants to provide feedback for any future surveys, I'm happy to speak with you offline about it. The same also goes for anyone in a "niche" field such as long-term care, ambulatory care, managed care, etc. I'm happy to add in new sections or questions for those fields - it's just that I have no idea what to ask, having no experience in those areas. There are probably a few questions whose answers aren't reflected here mainly because this is long enough already, but if you have any questions (eg, what's the average salary for a hospital pharmacist in a suburban area?), please feel free to ask! Thanks again!
I have a Conv-6 CNN inspired from VGG-19 for CIFAR-10 dataset which I am using with Data Augmentation using tf.Datagen flow() method. The code is as follows-
# Data preprocessing and cleaning: # input image dimensions img_rows, img_cols = 32, 32 # Load CIFAR-10 dataset- (X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data() print("X_train.shape = {0}, y_train.shape = {1}".format(X_train.shape, y_train.shape)) print("X_test.shape = {0}, y_test.shape = {1}".format(X_test.shape, y_test.shape)) # X_train.shape = (50000, 32, 32, 3), y_train.shape = (50000, 1) # X_test.shape = (10000, 32, 32, 3), y_test.shape = (10000, 1) if tf.keras.backend.image_data_format() == 'channels_first': X_train = X_train.reshape(X_train.shape[0], 3, img_rows, img_cols) X_test = X_test.reshape(X_test.shape[0], 3, img_rows, img_cols) input_shape = (3, img_rows, img_cols) else: X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 3) X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 3) input_shape = (img_rows, img_cols, 3) print("\n'input_shape' which will be used = {0}\n".format(input_shape)) # 'input_shape' which will be used = (32, 32, 3) # Convert datasets to floating point types- X_train = X_train.astype('float32') X_test = X_test.astype('float32') # Normalize the training and testing datasets- X_train /= 255.0 X_test /= 255.0 # convert class vectors/target to binary class matrices or one-hot encoded values- y_train = tf.keras.utils.to_categorical(y_train, num_classes) y_test = tf.keras.utils.to_categorical(y_test, num_classes) print("\nDimensions of training and testing sets are:") print("X_train.shape = {0}, y_train.shape = {1}".format(X_train.shape, y_train.shape)) print("X_test.shape = {0}, y_test.shape = {1}".format(X_test.shape, y_test.shape)) # Dimensions of training and testing sets are: # X_train.shape = (50000, 32, 32, 3), y_train.shape = (50000, 10) # X_test.shape = (10000, 32, 32, 3), y_test.shape = (10000, 10) train_dataset_features = tf.data.Dataset.from_tensor_slices(X_train) train_dataset_labels = tf.data.Dataset.from_tensor_slices(y_train) test_dataset_features = tf.data.Dataset.from_tensor_slices(X_test) test_dataset_labels = tf.data.Dataset.from_tensor_slices(y_test) # Choose an optimizer and loss function for training- loss_fn = tf.keras.losses.CategoricalCrossentropy() optimizer = tf.keras.optimizers.Adam(lr = 0.0003) # Select metrics to measure the error & accuracy of model. # These metrics accumulate the values over epochs and then # print the overall result- train_loss = tf.keras.metrics.Mean(name = 'train_loss') train_accuracy = tf.keras.metrics.CategoricalAccuracy(name = 'train_accuracy') test_loss = tf.keras.metrics.Mean(name = 'test_loss') test_accuracy = tf.keras.metrics.CategoricalAccuracy(name = 'test_accuracy') # Example of using 'tf.keras.preprocessing.image import ImageDataGenerator class's - flow(x, y)': datagen = ImageDataGenerator( # featurewise_center=True, # featurewise_std_normalization=True, rotation_range = 90, width_shift_range = 0.1, height_shift_range = 0.1, horizontal_flip = True ) def conv6_cnn(): """ Function to define the architecture of a neural network model following Conv-6 architecture for CIFAR-10 dataset and using provided parameter which are used to prune the model. 
Conv-6 architecture- 64, 64, pool -- convolutional layers 128, 128, pool -- convolutional layers 256, 256, pool -- convolutional layers 256, 256, 10 -- fully connected layers Output: Returns designed and compiled neural network model """ l = tf.keras.layers model = Sequential() model.add( Conv2D( filters = 64, kernel_size = (3, 3), activation='relu', kernel_initializer = tf.initializers.GlorotNormal(), strides = (1, 1), padding = 'same', input_shape=(32, 32, 3) ) ) model.add( Conv2D( filters = 64, kernel_size = (3, 3), activation='relu', kernel_initializer = tf.initializers.GlorotNormal(), strides = (1, 1), padding = 'same' ) ) model.add( MaxPooling2D( pool_size = (2, 2), strides = (2, 2) ) ) model.add( Conv2D( filters = 128, kernel_size = (3, 3), activation='relu', kernel_initializer = tf.initializers.GlorotNormal(), strides = (1, 1), padding = 'same' ) ) model.add( Conv2D( filters = 128, kernel_size = (3, 3), activation='relu', kernel_initializer = tf.initializers.GlorotNormal(), strides = (1, 1), padding = 'same' ) ) model.add( MaxPooling2D( pool_size = (2, 2), strides = (2, 2) ) ) model.add( Conv2D( filters = 256, kernel_size = (3, 3), activation='relu', kernel_initializer = tf.initializers.GlorotNormal(), strides = (1, 1), padding = 'same' ) ) model.add( Conv2D( filters = 256, kernel_size = (3, 3), activation='relu', kernel_initializer = tf.initializers.GlorotNormal(), strides = (1, 1), padding = 'same' ) ) model.add( MaxPooling2D( pool_size = (2, 2), strides = (2, 2) ) ) model.add(Flatten()) model.add( Dense( units = 256, activation='relu', kernel_initializer = tf.initializers.GlorotNormal() ) ) model.add( Dense( units = 256, activation='relu', kernel_initializer = tf.initializers.GlorotNormal() ) ) model.add( Dense( units = 10, activation='softmax' ) ) # Compile pruned CNN- model.compile( loss=tf.keras.losses.categorical_crossentropy, # optimizer='adam', optimizer=tf.keras.optimizers.Adam(lr = 0.0003), metrics=['accuracy'] ) return model # Instantiate a new Conv-2 CNN model- orig_model = conv6_cnn() # Load weights from before having 92.55% sparsity- orig_model.load_weights("Conv_6_CIFAR10_Magnitude_Based_Winning_Ticket_Distribution_92.55423622890814.h5") # Create mask using winning ticket- # Use masks to preserve sparsity- # Instantiate a new neural network model for which, the mask is to be created, mask_model = conv6_cnn() # Load weights of PRUNED model- mask_model.set_weights(orig_model.get_weights()) # For each layer, for each weight which is 0, leave it, as is. 
# And for weights which survive the pruning,reinitialize it to ONE (1)- for wts in mask_model.trainable_weights: wts.assign(tf.where(tf.equal(wts, 0.), 0., 1.)) # User input parameters for Early Stopping in manual implementation- minimum_delta = 0.001 patience = 3 best_val_loss = 100 loc_patience = 0 # Initialize a new LeNet-300-100 model- winning_ticket_model = conv6_cnn() # Load weights of winning ticket- winning_ticket_model.set_weights(orig_model.get_weights()) # Define 'train_one_step()' and 'test_step()' functions here- u/tf.function def train_one_step(model, mask_model, optimizer, x, y): ''' Function to compute one step of gradient descent optimization ''' with tf.GradientTape() as tape: # Make predictions using defined model- y_pred = model(x) # Compute loss- loss = loss_fn(y, y_pred) # Compute gradients wrt defined loss and weights and biases- grads = tape.gradient(loss, model.trainable_variables) # type(grads) # list # List to hold element-wise multiplication between- # computed gradient and masks- grad_mask_mul = [] # Perform element-wise multiplication between computed gradients and masks- for grad_layer, mask in zip(grads, mask_model.trainable_weights): grad_mask_mul.append(tf.math.multiply(grad_layer, mask)) # Apply computed gradients to model's weights and biases- optimizer.apply_gradients(zip(grad_mask_mul, model.trainable_variables)) # Compute accuracy- train_loss(loss) train_accuracy(y, y_pred) return None u/tf.function def test_step(model, optimizer, data, labels): """ Function to test model performance on testing dataset """ predictions = model(data) t_loss = loss_fn(labels, predictions) test_loss(t_loss) test_accuracy(labels, predictions) return None curr_step = 0 for x, y in datagen.flow(X_train, y_train, batch_size = batch_size, shuffle = True): train_one_step(winning_ticket_model, mask_model, optimizer, x, y) # print("current step = ", curr_step) curr_step += 1 if curr_step >= X_train.shape[0] // batch_size: print("\nTerminating training (datagen.flow())") break
But the following code gives error:
for x_t, y_t in test_dataset: test_step(winning_ticket_model, optimizer, x_t, y_t)
> ValueError Traceback (most recent call > last) in > 1 for x_t, y_t in test_dataset: > -- 2 test_step(winning_ticket_model, optimizer, x_t, y_t) > > ~/.local/lib/python3.7/site-packages/tensorflow/python/eagedef_function.py > in __call__(self, *args, **kwds) > 578 xla_context.Exit() > 579 else: > 580 result = self._call(*args, **kwds) > 581 > 582 if tracing_count == self._get_tracing_count(): > > ~/.local/lib/python3.7/site-packages/tensorflow/python/eagedef_function.py > in _call(self, *args, **kwds) > 625 # This is the first call of __call__, so we have to initialize. > 626 initializers = [] > 627 self._initialize(args, kwds, add_initializers_to=initializers) > 628 finally: > 629 # At this point we know that the initialization is complete (or less > > ~/.local/lib/python3.7/site-packages/tensorflow/python/eagedef_function.py > in _initialize(self, args, kwds, add_initializers_to) > 504 self._concrete_stateful_fn = ( > 505 self._stateful_fn._get_concrete_function_internal_garbage_collected( > # pylint: disable=protected-access > 506 *args, **kwds)) > 507 > 508 def invalid_creator_scope(*unused_args, **unused_kwds): > > ~/.local/lib/python3.7/site-packages/tensorflow/python/eagefunction.py > in _get_concrete_function_internal_garbage_collected(self, *args, > **kwargs) 2444 args, kwargs = None, None 2445 with self._lock: > -> 2446 graph_function, _, _ = self._maybe_define_function(args, kwargs) 2447 return graph_function 2448 > > ~/.local/lib/python3.7/site-packages/tensorflow/python/eagefunction.py > in _maybe_define_function(self, args, kwargs) 2775 2776 > self._function_cache.missed.add(call_context_key) > -> 2777 graph_function = self._create_graph_function(args, kwargs) 2778 self._function_cache.primary[cache_key] = > graph_function 2779 return graph_function, args, kwargs > > ~/.local/lib/python3.7/site-packages/tensorflow/python/eagefunction.py > in _create_graph_function(self, args, kwargs, > override_flat_arg_shapes) 2665 arg_names=arg_names, > 2666 override_flat_arg_shapes=override_flat_arg_shapes, > -> 2667 capture_by_value=self._capture_by_value), 2668 self._function_attributes, 2669 # Tell the ConcreteFunction > to clean up its graph once it goes out of > > ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py > in func_graph_from_py_func(name, python_func, args, kwargs, signature, > func_graph, autograph, autograph_options, add_control_dependencies, > arg_names, op_return_value, collections, capture_by_value, > override_flat_arg_shapes) > 979 _, original_func = tf_decorator.unwrap(python_func) > 980 > 981 func_outputs = python_func(*func_args, **func_kwargs) > 982 > 983 # invariant: `func_outputs` contains only Tensors, CompositeTensors, > > ~/.local/lib/python3.7/site-packages/tensorflow/python/eagedef_function.py > in wrapped_fn(*args, **kwds) > 439 # __wrapped__ allows AutoGraph to swap in a converted function. We give > 440 # the function a weak reference to itself to avoid a reference cycle. 
> 441 return weak_wrapped_fn().__wrapped__(*args, **kwds) > 442 weak_wrapped_fn = weakref.ref(wrapped_fn) > 443 > > ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py > in wrapper(*args, **kwargs) > 966 except Exception as e: # pylint:disable=broad-except > 967 if hasattr(e, "ag_error_metadata"): > 968 raise e.ag_error_metadata.to_exception(e) > 969 else: > 970 raise > > ValueError: in user code: > > :45 test_step * > predictions = model(data) > /home/majumda.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:886 > __call__ ** > self.name) > /home/majumda.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_spec.py:180 > assert_input_compatibility > str(x.shape.as_list())) > > ValueError: Input 0 of layer sequential_7 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [32, 32, 3] > >
I have a Conv-6 CNN inspired from VGG-19 for CIFAR-10 dataset which I am using with Data Augmentation using tf.Datagen flow() method. The code is as follows-
# Data preprocessing and cleaning:

# Imports assumed by the snippets below (not shown in the original post)-
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator

num_classes = 10   # CIFAR-10 classes; used below but not defined in the original snippet
batch_size = 32    # used by the training loop below; value not shown in the original snippet

# input image dimensions-
img_rows, img_cols = 32, 32

# Load CIFAR-10 dataset-
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()

print("X_train.shape = {0}, y_train.shape = {1}".format(X_train.shape, y_train.shape))
print("X_test.shape = {0}, y_test.shape = {1}".format(X_test.shape, y_test.shape))
# X_train.shape = (50000, 32, 32, 3), y_train.shape = (50000, 1)
# X_test.shape = (10000, 32, 32, 3), y_test.shape = (10000, 1)

if tf.keras.backend.image_data_format() == 'channels_first':
    X_train = X_train.reshape(X_train.shape[0], 3, img_rows, img_cols)
    X_test = X_test.reshape(X_test.shape[0], 3, img_rows, img_cols)
    input_shape = (3, img_rows, img_cols)
else:
    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 3)
    X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 3)
    input_shape = (img_rows, img_cols, 3)

print("\n'input_shape' which will be used = {0}\n".format(input_shape))
# 'input_shape' which will be used = (32, 32, 3)

# Convert datasets to floating point types-
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')

# Normalize the training and testing datasets-
X_train /= 255.0
X_test /= 255.0

# Convert class vectors/targets to one-hot encoded values-
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)

print("\nDimensions of training and testing sets are:")
print("X_train.shape = {0}, y_train.shape = {1}".format(X_train.shape, y_train.shape))
print("X_test.shape = {0}, y_test.shape = {1}".format(X_test.shape, y_test.shape))
# X_train.shape = (50000, 32, 32, 3), y_train.shape = (50000, 10)
# X_test.shape = (10000, 32, 32, 3), y_test.shape = (10000, 10)

# Per-example tf.data datasets (note: these are never zipped or batched here)-
train_dataset_features = tf.data.Dataset.from_tensor_slices(X_train)
train_dataset_labels = tf.data.Dataset.from_tensor_slices(y_train)
test_dataset_features = tf.data.Dataset.from_tensor_slices(X_test)
test_dataset_labels = tf.data.Dataset.from_tensor_slices(y_test)

# Choose an optimizer and loss function for training-
loss_fn = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam(lr = 0.0003)

# Select metrics to measure the error & accuracy of the model.
# These metrics accumulate the values over epochs and then
# print the overall result-
train_loss = tf.keras.metrics.Mean(name = 'train_loss')
train_accuracy = tf.keras.metrics.CategoricalAccuracy(name = 'train_accuracy')
test_loss = tf.keras.metrics.Mean(name = 'test_loss')
test_accuracy = tf.keras.metrics.CategoricalAccuracy(name = 'test_accuracy')

# Data augmentation via ImageDataGenerator's flow(x, y) method-
datagen = ImageDataGenerator(
    # featurewise_center=True,
    # featurewise_std_normalization=True,
    rotation_range = 90,
    width_shift_range = 0.1,
    height_shift_range = 0.1,
    horizontal_flip = True
)

def conv6_cnn():
    """
    Define a neural network model following the Conv-6 architecture
    for the CIFAR-10 dataset; the returned model is later pruned.

    Conv-6 architecture-
    64, 64, pool    -- convolutional layers
    128, 128, pool  -- convolutional layers
    256, 256, pool  -- convolutional layers
    256, 256, 10    -- fully connected layers

    Output: returns the designed and compiled neural network model
    """
    model = Sequential()

    model.add(Conv2D(filters = 64, kernel_size = (3, 3), activation = 'relu',
                     kernel_initializer = tf.initializers.GlorotNormal(),
                     strides = (1, 1), padding = 'same', input_shape = (32, 32, 3)))
    model.add(Conv2D(filters = 64, kernel_size = (3, 3), activation = 'relu',
                     kernel_initializer = tf.initializers.GlorotNormal(),
                     strides = (1, 1), padding = 'same'))
    model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))

    model.add(Conv2D(filters = 128, kernel_size = (3, 3), activation = 'relu',
                     kernel_initializer = tf.initializers.GlorotNormal(),
                     strides = (1, 1), padding = 'same'))
    model.add(Conv2D(filters = 128, kernel_size = (3, 3), activation = 'relu',
                     kernel_initializer = tf.initializers.GlorotNormal(),
                     strides = (1, 1), padding = 'same'))
    model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))

    model.add(Conv2D(filters = 256, kernel_size = (3, 3), activation = 'relu',
                     kernel_initializer = tf.initializers.GlorotNormal(),
                     strides = (1, 1), padding = 'same'))
    model.add(Conv2D(filters = 256, kernel_size = (3, 3), activation = 'relu',
                     kernel_initializer = tf.initializers.GlorotNormal(),
                     strides = (1, 1), padding = 'same'))
    model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))

    model.add(Flatten())
    model.add(Dense(units = 256, activation = 'relu',
                    kernel_initializer = tf.initializers.GlorotNormal()))
    model.add(Dense(units = 256, activation = 'relu',
                    kernel_initializer = tf.initializers.GlorotNormal()))
    model.add(Dense(units = 10, activation = 'softmax'))

    # Compile the CNN-
    model.compile(
        loss = tf.keras.losses.categorical_crossentropy,
        optimizer = tf.keras.optimizers.Adam(lr = 0.0003),
        metrics = ['accuracy']
    )

    return model

# Instantiate a new Conv-6 CNN model-
orig_model = conv6_cnn()

# Load weights from before, having 92.55% sparsity-
orig_model.load_weights("Conv_6_CIFAR10_Magnitude_Based_Winning_Ticket_Distribution_92.55423622890814.h5")

# Create a mask from the winning ticket; the mask is used to preserve sparsity.
# Instantiate a new neural network model for which the mask is to be created-
mask_model = conv6_cnn()

# Load weights of the PRUNED model-
mask_model.set_weights(orig_model.get_weights())

# For each layer, leave each weight which is 0 as-is, and reinitialize
# each weight which survived the pruning to ONE (1)-
for wts in mask_model.trainable_weights:
    wts.assign(tf.where(tf.equal(wts, 0.), 0., 1.))

# User input parameters for a manual Early Stopping implementation-
minimum_delta = 0.001
patience = 3
best_val_loss = 100
loc_patience = 0

# Initialize a new Conv-6 model-
winning_ticket_model = conv6_cnn()

# Load weights of the winning ticket-
winning_ticket_model.set_weights(orig_model.get_weights())

# Define 'train_one_step()' and 'test_step()' functions here-
@tf.function
def train_one_step(model, mask_model, optimizer, x, y):
    '''
    Compute one step of gradient-descent optimization
    '''
    with tf.GradientTape() as tape:
        # Make predictions using the defined model-
        y_pred = model(x)

        # Compute loss-
        loss = loss_fn(y, y_pred)

    # Compute gradients wrt the defined loss and the weights and biases-
    grads = tape.gradient(loss, model.trainable_variables)

    # Element-wise multiplication between computed gradients and masks,
    # so pruned (zero) weights receive zero gradient and stay pruned-
    grad_mask_mul = []
    for grad_layer, mask in zip(grads, mask_model.trainable_weights):
        grad_mask_mul.append(tf.math.multiply(grad_layer, mask))

    # Apply the masked gradients to the model's weights and biases-
    optimizer.apply_gradients(zip(grad_mask_mul, model.trainable_variables))

    # Accumulate loss & accuracy metrics-
    train_loss(loss)
    train_accuracy(y, y_pred)

@tf.function
def test_step(model, optimizer, data, labels):
    """
    Test model performance on the testing dataset
    """
    predictions = model(data)
    t_loss = loss_fn(labels, predictions)

    test_loss(t_loss)
    test_accuracy(labels, predictions)

# One epoch of training; datagen.flow() loops forever, so break manually-
curr_step = 0

for x, y in datagen.flow(X_train, y_train, batch_size = batch_size, shuffle = True):
    train_one_step(winning_ticket_model, mask_model, optimizer, x, y)
    curr_step += 1

    if curr_step >= X_train.shape[0] // batch_size:
        print("\nTerminating training (datagen.flow())")
        break
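Worth noting before the error below: the first Conv2D layer pins the model's expected input to a 4-D batch, where the leading axis is the batch dimension. A quick sanity check, as a minimal sketch separate from the script above (the `probe` name is mine, not from the original code):

# Check the input signature the compiled Sequential model expects-
probe = conv6_cnn()
print(probe.input_shape)
# (None, 32, 32, 3) -> 4-D: (batch, height, width, channels)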
But the following code gives an error:
for x_t, y_t in test_dataset:
    test_step(winning_ticket_model, optimizer, x_t, y_t)
> ValueError                                Traceback (most recent call last)
> <ipython-input> in <module>
>       1 for x_t, y_t in test_dataset:
> ----> 2     test_step(winning_ticket_model, optimizer, x_t, y_t)
>
> ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
>     578       xla_context.Exit()
>     579     else:
> --> 580       result = self._call(*args, **kwds)
>     581
>     582     if tracing_count == self._get_tracing_count():
>
> ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
>     625       # This is the first call of __call__, so we have to initialize.
>     626       initializers = []
> --> 627       self._initialize(args, kwds, add_initializers_to=initializers)
>     628     finally:
>     629       # At this point we know that the initialization is complete (or less
>
> ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
>     504     self._concrete_stateful_fn = (
> --> 505         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
>     506             *args, **kwds))
>     507
>     508   def invalid_creator_scope(*unused_args, **unused_kwds):
>
> ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
>    2444       args, kwargs = None, None
>    2445     with self._lock:
> -> 2446       graph_function, _, _ = self._maybe_define_function(args, kwargs)
>    2447     return graph_function
>    2448
>
> ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
>    2775
>    2776       self._function_cache.missed.add(call_context_key)
> -> 2777       graph_function = self._create_graph_function(args, kwargs)
>    2778       self._function_cache.primary[cache_key] = graph_function
>    2779       return graph_function, args, kwargs
>
> ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
>    2665           arg_names=arg_names,
>    2666           override_flat_arg_shapes=override_flat_arg_shapes,
> -> 2667           capture_by_value=self._capture_by_value),
>    2668       self._function_attributes,
>    2669       # Tell the ConcreteFunction to clean up its graph once it goes out of
>
> ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
>     979         _, original_func = tf_decorator.unwrap(python_func)
>     980
> --> 981         func_outputs = python_func(*func_args, **func_kwargs)
>     982
>     983         # invariant: `func_outputs` contains only Tensors, CompositeTensors,
>
> ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
>     439           # __wrapped__ allows AutoGraph to swap in a converted function. We give
>     440           # the function a weak reference to itself to avoid a reference cycle.
> --> 441           return weak_wrapped_fn().__wrapped__(*args, **kwds)
>     442         weak_wrapped_fn = weakref.ref(wrapped_fn)
>     443
>
> ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
>     966           except Exception as e:  # pylint:disable=broad-except
>     967             if hasattr(e, "ag_error_metadata"):
> --> 968               raise e.ag_error_metadata.to_exception(e)
>     969             else:
>     970               raise
>
> ValueError: in user code:
>
>     <ipython-input>:45 test_step  *
>         predictions = model(data)
>     /home/majumda/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:886 __call__  **
>         self.name)
>     /home/majumda/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_spec.py:180 assert_input_compatibility
>         str(x.shape.as_list()))
>
>     ValueError: Input 0 of layer sequential_7 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [32, 32, 3]
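My suspicion is that `test_dataset` (never defined in the snippets above) was built from `test_dataset_features` and `test_dataset_labels` without batching: `from_tensor_slices` yields one example at a time, so each `x_t` has shape (32, 32, 3), which is exactly the 3-D input the error reports, while the model wants 4-D. A minimal sketch of a batched test pipeline, assuming the variables from the snippet (the zip/batch line is a reconstruction, not the original code):

# Pair each image with its label, then batch so every element is 4-D-
test_dataset = tf.data.Dataset.zip(
    (test_dataset_features, test_dataset_labels)
).batch(batch_size)

for x_t, y_t in test_dataset:
    # x_t now has shape (batch_size, 32, 32, 3), i.e. ndim=4-
    test_step(winning_ticket_model, optimizer, x_t, y_t)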
Wall Street Week Ahead for the trading week beginning December 9th, 2019
Good Saturday morning to all of you here on wallstreetbets. I hope everyone on this sub made out pretty nicely in the market this past week, and is ready for the new trading week ahead. Here is everything you need to know to get you ready for the trading week beginning December 9th, 2019.
What Trump does before trade deadline is the ‘wild card’ that will drive markets in the week ahead - (Source)
The Trump administration’s Dec. 15 deadline for new tariffs on China looms large, and while most strategists expect them to be delayed while talks continue, they don’t rule out the unexpected.

“That’s the biggest thing in the room next week. I don’t think he’s going to raise them. I think they’ll find a reason,” said James Paulsen, chief investment strategist at Leuthold Group. But Paulsen said President Donald Trump’s unpredictable nature makes it nearly impossible to tell what will happen as the deadline nears. “He’s the one-off you’re never sure about. It’s not just tariffs. It could be damn near anything,” Paulsen said. “I think he goes out of his way to be a wild card.”

Just in the past week, Trump said he would put new tariffs on Brazil, Argentina and France. He rattled markets when he said he could wait until after the election for a trade deal with China. Once dubbing himself “tariff man,” Trump reminded markets that he sees tariffs as a way of getting what he wants from an opponent, and traders were reminded that tariffs may be around for a long time.

Trade could well be the most important event for markets in the week ahead, which also includes a Fed interest rate decision Wednesday and the U.K. election that could set the course for Brexit. If there’s no China deal, that could beat up stocks, send Treasury yields lower and send investors into other safe havens. When Fed officials meet this week, they are not expected to change interest rates, but they are likely to discuss whether their repo operations to drive liquidity in the short-term funding market are running smoothly ahead of year end. Economic reports in the coming week include CPI inflation Wednesday, which could be an important input for the Fed.

Punt, but no deal

As of Friday, the White House did not appear any closer to striking a deal with China, though officials say talks are going fine. Back in August, Trump said that if there is no deal, Dec. 15 is the date for a new wave of tariffs on $156 billion in Chinese goods, including cell phones, toys and laptop computers.

Dan Clifton, head of policy research at Strategas, said it seems like a low probability there will be a deal in the coming week. “What the market is focused on right now is whether there’s going to be tariffs that go into effect on Dec. 15, or not. It’s being rated pretty binary,” said Clifton. “I think what’s happening here and the actions by China overnight looks like we’re setting up for a kick.”

China removed some tariffs from U.S. agricultural products Friday, and administration officials have been talking about discussions going fine. Clifton said that if tariffs are put on hold, it’s unclear for how long. “Those are going to be larger questions that have to be answered. This is really now about politics. Is it a better idea for the president to cut a deal without major structural reforms, or should he walk away? That’s the larger debate that has to happen after Dec. 15,” Clifton said. “I’m getting worried that some in the administration... they’re leaning toward the no deal category.”

Clifton said Trump’s approval rating falls when the trade wars heat up, so that may motivate him to complete the deal with China even if he doesn’t get everything he wants. Michael Schumacher, director of rates strategy at Wells Fargo, said his base case is for a trade deal to be signed in the next couple of months, but even so, he said he can’t entirely rule out another outcome. It would make sense for tariffs to be put on hold while talks continue.
“The tweeter-in-chief controls that one,” said Schumacher. “That’s anybody’s guess... I wouldn’t be at all surprised if he suspends it for a few weeks. If he doesn’t, that’s a pretty unpleasant result. That’s risk off. That’s pretty clear.” Because the next group of tariffs would be on consumer goods, economists fear they could hit the economy through the consumer, the strongest and largest engine behind economic growth.

Fed ahead

The Fed has moved to the sidelines and says it is monitoring economic data before deciding its next move. Friday’s strong November jobs report, with 266,000 jobs added, reinforces the Fed’s decision to move to neutral for now. So the most important headlines from its meeting this week could be about the repo market, basically the plumbing of the financial system where financial institutions fund themselves. Interest rates in that somewhat obscure market spiked in September. Market pros said the issue was a cash crunch in the short-term lending market, eased when the Fed started repo operations. The Fed now has multiple operations running over year end, and Schumacher said it has latitude to do more.

Strategists expect more pressure on the repo market as banks rein in operations to spruce up their balance sheets at year end. “No one is going to come to the Fed and say you did too much in the year-end funding,” said Schumacher. “If repo happens to spike somewhat on one day, the Fed is going to hammer it the next day.”

Paulsen said the markets will be attuned to this week’s inflation numbers. Consumer inflation, the CPI, is reported on Wednesday, and producer prices are Thursday. A pickup in inflation of any significance is one thing that could pull the Fed from the sidelines and prod it to consider a rate hike. “I think the inflation reports might start to get a little attention. Given the jobs numbers, the employment rate, growth picking up a little bit and a better tone in manufacturing, I do think if you get some hot CPI number, I don’t know if the Fed can ignore it,” he said. “Core CPI is 2.3%.” He said it would get noticed if it jumped to 2.5% or better. The Fed’s inflation target is 2%, but its preferred measure is PCE inflation, and that remains under 2%.

Stocks were sharply higher Friday but ended the past week flattish. The S&P 500 was slightly higher, up 0.2% at 3,145, and the Dow was down 0.1% at 28,015. The Nasdaq was 0.1% lower, ending the week at 8,656.
This past week saw the following moves in the S&P:
It has been a rough start to the most wonderful month of them all, with the S&P 500 Index down each of the first two days of December. Don’t stop believing just yet, though. Everyone knows December has usually been a good month for stocks, but what happened last year is still fresh in the minds of many investors. The S&P 500 fell 9.1% in December 2018 for the worst December since 1931. That sounds really bad, until you realize stocks fell 30% in September 1931, but we digress.

One major difference between now and last year is how well global equities have been performing. Heading into December 2018, the S&P 500 was up 3.2% year to date, but markets outside of the United States were already firmly in the red, with many down double digits. “We don’t think stocks are on the verge of another massive December sell-off,” said LPL Financial Senior Market Strategist Ryan Detrick. “If my Cincinnati Bengals can win a game, anything is possible. However, we are quite encouraged by the overall participation we are seeing from various global stock markets this year versus last year, when the United States was about the only market in the green heading into December.”

Stocks have also overcome volatile starts to December recently. The S&P 500 was down four days in a row to start December 2013 and 2017, but the gauge still managed to gain 2.4% and 1%, respectively, in those years. As the LPL Chart of the Day shows, December has been the second-best month of the year for stocks going back to 1950. It is worth noting that it was the best month of the year before last year’s massive drop. Stocks have historically been strong in pre-election years as well, and December has never been lower two times in a row during a pre-election year. Given stocks fell in December 2015, bulls could be smiling when this month is wrapped up.
Impeaching a President, with the possibility of removal from office, is by no means great for the country. However, it may not be so horrible for the stock market or investors if history is any guide. We first touched on this over two years ago here on the blog, and now that much has transpired and the US House of Representatives is proceeding with drafting articles of impeachment, we figured it was a good time to revisit the (albeit limited) history of market behavior during presidential impeachment proceedings. The three charts below really tell the story.

During the Watergate scandal of Nixon’s second term, the market suffered a major bear market from January 1973 to October–December 1974, with the Dow down 45.1%, the S&P 500 down 48.2% and NASDAQ down 59.9%. Sure, there were other factors that contributed to the bear market, such as the Oil Embargo, the Arab-Israeli War, the collapse of the Bretton Woods system, high inflation and Watergate. However, shortly after Nixon resigned on August 9, 1974, the market reached its secular bear market low, on October 3 for the S&P and NASDAQ and December 6 for the Dow.

Leading up to the Clinton investigations and through his subsequent impeachment and acquittal by the Senate, the market was on a tear as one of the biggest bull markets in history raged on. After the 1994 midterm elections, when the Republicans took back control of both houses of Congress, the market remained on a 45-degree upward trajectory except for a few blips and the shortest bear market on record, which lasted 45 days and bottomed on August 31, 1998. Clinton was impeached in December 1998 and acquitted in February 1999 as the market continued higher throughout his second term. Sure, there were other factors that contributed to the late-1990s bull run, such as the Dotcom Boom, the Information Revolution, millennial fervor and a booming global economy, but Clinton’s personal scandal had little negative impact on markets.

It remains to be seen, of course, what will happen with President Trump’s impeachment proceedings and how the world and markets react, but the market continues to march on. If the limited history of impeachment proceedings of a US President in modern times (no offense to our 17th President, Andrew Johnson) is any guide, the market has bounced back after the last two impeachment proceedings and was higher a year later. Perhaps it will be better to buy any impeachment dip rather than sell it.
Typical December Trading: Modest Strength Early, Choppy Middle and Solid Gains Late
Historically, the first trading day of December, today, has a slightly bearish bias with S&P 500 advancing 34 times over the last 69 years (since 1950) with an average loss of 0.02%. Tomorrow, the second trading day of December however, has been stronger, up 52.2% of the time since 1950 with an average gain of 0.08% and the third day is better still, up 59.4% of the time. Over the more recent 21-year period, December has opened with strength and gains over its first seven trading days before beginning to drift. By mid-month all five indices have surrendered any early-month gains, but shortly thereafter Santa usually visits sending the market higher until the last day of the month and the year when last minute selling, most likely for tax reasons, briefly interrupts the market’s rally.
Odds Still Favor A Gain for Rest of December Despite Rough Start
Just when it was beginning to look like trade was heading in a positive direction, the wind changed direction again. Yesterday it was steel and aluminum tariffs on Brazil and Argentina and today a deal with China may not happen as soon as previously anticipated. The result was the worst first two trading days of December since last year and the sixth worst start since 1950 for S&P 500. DJIA and NASDAQ are eighth worst since 1950 and 1971, respectively. However, historically past weakness in early December (losses over the first two trading days combined) were still followed by average gains for the remainder of the month the majority of the time. DJIA has advanced 74.19% of the time following losses over the first two trading days with an average gain for the remainder of December of 1.39%. S&P 500 was up 67.65% of the time with an average rest of month gain of 0.84%. NASDAQ is modestly softer advancing 61.11% of the time during the remainder of December with an average advance of 0.30%.
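For anyone who wants to reproduce this style of conditional seasonal statistic, the arithmetic is straightforward: measure the combined return of December's first two trading days (versus the prior trading day's close) and the return over the remainder of the month, then condition on a weak start. A rough sketch, assuming a pandas Series `close` of daily index closes with a DatetimeIndex (the data source and the exact Stock Trader's Almanac methodology are assumptions; the percentages quoted above are the post's own, not re-derived here):

import pandas as pd

def december_conditional_stats(close: pd.Series) -> pd.DataFrame:
    """For each December, compute the combined return of the first two
    trading days (vs. the prior trading day's close) and the return over
    the remainder of the month, then summarize Decembers that started weak."""
    rows = []
    for year in sorted(set(close.index.year)):
        dec = close[(close.index.year == year) & (close.index.month == 12)]
        if len(dec) < 3:
            continue
        prior = close[close.index < dec.index[0]]
        if prior.empty:
            continue
        first_two = dec.iloc[1] / prior.iloc[-1] - 1  # first two trading days, combined
        rest = dec.iloc[-1] / dec.iloc[1] - 1         # remainder of December
        rows.append((year, first_two, rest))
    df = pd.DataFrame(rows, columns=["year", "first_two", "rest"]).set_index("year")
    weak = df[df["first_two"] < 0]
    print("After a down first two days, the rest of December was positive "
          "{:.1%} of the time (average {:+.2%}).".format(
              (weak["rest"] > 0).mean(), weak["rest"].mean()))
    return df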
Below are some of the notable companies reporting earnings in the trading week ahead, including the date/time of release and consensus estimates, courtesy of Earnings Whispers:
Friday 12.13.19 Before Market Open:
NONE.

Friday 12.13.19 After Market Close:
NONE.
lululemon athletica inc. $229.38
lululemon athletica inc. (LULU) is confirmed to report earnings at approximately 4:05 PM ET on Wednesday, December 11, 2019. The consensus earnings estimate is $0.93 per share on revenue of $896.50 million and the Earnings Whisper ® number is $0.98 per share. Investor sentiment going into the company's earnings release has 73% expecting an earnings beat. The company's guidance was for earnings of $0.90 to $0.92 per share on revenue of $880.00 million to $890.00 million. Consensus estimates are for year-over-year earnings growth of 24.00% with revenue increasing by 19.91%. Short interest has increased by 9.8% since the company's last earnings release while the stock has drifted higher by 16.0% from its open following the earnings release to be 26.0% above its 200 day moving average of $182.08. Overall earnings estimates have been revised higher since the company's last earnings release. On Friday, December 6, 2019 there was some notable buying of 927 contracts of the $260.00 call expiring on Friday, December 13, 2019. Option traders are pricing in an 8.3% move on earnings and the stock has averaged an 11.1% move in recent quarters.
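As an aside on where the "pricing in an 8.3% move" figure comes from: a common rule of thumb divides the price of the at-the-money straddle expiring just after the report by the stock price. A tiny sketch with made-up option prices chosen only to land near 8.3% (they are not actual LULU quotes):

def implied_earnings_move(atm_call: float, atm_put: float, spot: float) -> float:
    """Approximate market-implied earnings move: ATM straddle cost / spot price."""
    return (atm_call + atm_put) / spot

# Hypothetical at-the-money call/put prices, for illustration only-
print("{:.1%}".format(implied_earnings_move(9.50, 9.60, 229.38)))  # -> 8.3%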
Costco Wholesale Corp. (COST) is confirmed to report earnings at approximately 4:15 PM ET on Thursday, December 12, 2019. The consensus earnings estimate is $1.70 per share on revenue of $37.43 billion and the Earnings Whisper ® number is $1.74 per share. Investor sentiment going into the company's earnings release has 78% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 5.59% with revenue increasing by 6.73%. Short interest has increased by 19.3% since the company's last earnings release while the stock has drifted higher by 2.5% from its open following the earnings release to be 10.3% above its 200 day moving average of $267.50. Overall earnings estimates have been revised higher since the company's last earnings release. On Tuesday, November 19, 2019 there was some notable buying of 916 contracts of the $265.00 put expiring on Friday, December 27, 2019. Option traders are pricing in a 3.7% move on earnings and the stock has averaged a 3.6% move in recent quarters.
Thor Industries, Inc. (THO) is confirmed to report earnings at approximately 6:45 AM ET on Monday, December 9, 2019. The consensus earnings estimate is $1.23 per share on revenue of $2.30 billion and the Earnings Whisper ® number is $1.30 per share. Investor sentiment going into the company's earnings release has 69% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 16.89% with revenue increasing by 30.98%. Short interest has increased by 48.1% since the company's last earnings release while the stock has drifted higher by 25.5% from its open following the earnings release to be 16.0% above its 200 day moving average of $58.44. Overall earnings estimates have been revised lower since the company's last earnings release. On Tuesday, December 3, 2019 there was some notable buying of 838 contracts of the $60.00 put expiring on Friday, December 20, 2019. Option traders are pricing in a 10.0% move on earnings and the stock has averaged a 7.6% move in recent quarters.
AutoZone, Inc. (AZO) is confirmed to report earnings at approximately 6:55 AM ET on Tuesday, December 10, 2019. The consensus earnings estimate is $13.69 per share on revenue of $2.76 billion and the Earnings Whisper ® number is $14.02 per share. Investor sentiment going into the company's earnings release has 76% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 1.63% with revenue increasing by 4.48%. Short interest has decreased by 13.7% since the company's last earnings release while the stock has drifted higher by 1.1% from its open following the earnings release to be 8.9% above its 200 day moving average of $1,077.00. Overall earnings estimates have been revised lower since the company's last earnings release. Option traders are pricing in a 5.5% move on earnings and the stock has averaged a 5.6% move in recent quarters.
Adobe Inc. (ADBE) is confirmed to report earnings at approximately 4:05 PM ET on Thursday, December 12, 2019. The consensus earnings estimate is $2.26 per share on revenue of $2.97 billion and the Earnings Whisper ® number is $2.30 per share. Investor sentiment going into the company's earnings release has 74% expecting an earnings beat. The company's guidance was for earnings of approximately $2.25 per share. Consensus estimates are for year-over-year earnings growth of 23.50% with revenue increasing by 20.51%. Short interest has increased by 44.6% since the company's last earnings release while the stock has drifted higher by 11.2% from its open following the earnings release to be 9.1% above its 200 day moving average of $280.60. Overall earnings estimates have been revised higher since the company's last earnings release. On Monday, November 25, 2019 there was some notable buying of 505 contracts of the $340.00 call expiring on Friday, December 20, 2019. Option traders are pricing in a 3.9% move on earnings and the stock has averaged a 3.8% move in recent quarters.
Broadcom Limited (AVGO) is confirmed to report earnings at approximately 4:15 PM ET on Thursday, December 12, 2019. The consensus earnings estimate is $5.36 per share on revenue of $5.76 billion and the Earnings Whisper ® number is $5.47 per share. Investor sentiment going into the company's earnings release has 69% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 7.27% with revenue increasing by 5.80%. Short interest has increased by 22.8% since the company's last earnings release while the stock has drifted higher by 6.2% from its open following the earnings release to be 9.7% above its 200 day moving average of $288.21. Overall earnings estimates have been revised lower since the company's last earnings release. On Thursday, December 5, 2019 there was some notable buying of 625 contracts of the $135.00 call expiring on Friday, January 15, 2021. Option traders are pricing in a 5.2% move on earnings and the stock has averaged a 4.7% move in recent quarters.
Ciena Corporation (CIEN) is confirmed to report earnings at approximately 7:00 AM ET on Thursday, December 12, 2019. The consensus earnings estimate is $0.66 per share on revenue of $964.80 million and the Earnings Whisper ® number is $0.67 per share. Investor sentiment going into the company's earnings release has 72% expecting an earnings beat. The company's guidance was for revenue of $945.00 million to $975.00 million. Consensus estimates are for year-over-year earnings growth of 26.92% with revenue increasing by 7.28%. Short interest has increased by 66.6% since the company's last earnings release while the stock has drifted lower by 9.5% from its open following the earnings release to be 11.0% below its 200 day moving average of $39.32. Overall earnings estimates have been revised higher since the company's last earnings release. On Friday, December 6, 2019 there was some notable buying of 1,156 contracts of the $36.00 put expiring on Friday, December 13, 2019. Option traders are pricing in a 9.0% move on earnings and the stock has averaged a 10.1% move in recent quarters.
MongoDB, Inc. (MDB) is confirmed to report earnings at approximately 4:05 PM ET on Monday, December 9, 2019. The consensus estimate is for a loss of $0.28 per share on revenue of $99.73 million and the Earnings Whisper ® number is ($0.26) per share. Investor sentiment going into the company's earnings release has 63% expecting an earnings beat. The company's guidance was for a loss of $0.29 to $0.27 per share on revenue of $98.00 million to $100.00 million. Consensus estimates are for year-over-year earnings growth of 15.15% with revenue increasing by 53.47%. Short interest has increased by 15.2% since the company's last earnings release while the stock has drifted lower by 16.3% from its open following the earnings release to be 5.1% below its 200 day moving average of $138.19. Overall earnings estimates have been revised lower since the company's last earnings release. On Tuesday, November 19, 2019 there was some notable buying of 970 contracts of the $210.00 call expiring on Friday, December 20, 2019. Option traders are pricing in a 10.1% move on earnings and the stock has averaged an 8.7% move in recent quarters.
Chewy, Inc. (CHWY) is confirmed to report earnings at approximately 4:10 PM ET on Monday, December 9, 2019. The consensus estimate is for a loss of $0.16 per share on revenue of $1.21 billion and the Earnings Whisper ® number is ($0.15) per share. Investor sentiment going into the company's earnings release has 57% expecting an earnings beat. Short interest has increased by 40.7% since the company's last earnings release while the stock has drifted lower by 14.6% from its open following the earnings release. Overall earnings estimates have been revised lower since the company's last earnings release. The stock has averaged a 6.4% move on earnings in recent quarters.
Stitch Fix, Inc. (SFIX) is confirmed to report earnings at approximately 4:05 PM ET on Monday, December 9, 2019. The consensus estimate is for a loss of $0.06 per share on revenue of $441.04 million and the Earnings Whisper ® number is ($0.04) per share. Investor sentiment going into the company's earnings release has 69% expecting an earnings beat. The company's guidance was for revenue of $438.00 million to $442.00 million. Consensus estimates are for earnings to decline year-over-year by 160.00% with revenue increasing by 20.43%. Short interest has increased by 30.9% since the company's last earnings release while the stock has drifted higher by 41.7% from its open following the earnings release to be 2.4% below its 200 day moving average of $24.69. Overall earnings estimates have been revised lower since the company's last earnings release. On Thursday, November 21, 2019 there was some notable buying of 1,000 contracts of the $13.00 put expiring on Friday, January 17, 2020. Option traders are pricing in a 20.0% move on earnings and the stock has averaged an 18.9% move in recent quarters.
We have a very simple process, with the goal of making you money. You are sent exact signals that tell you the direction of the trade, the asset, and the expiry time to set. You can receive the signals on your PC, Mac, phone or e-mail. Once you receive the Binary Strategy signal, you place the trade manually, and cash in.

Binary.com is an award-winning online trading provider that helps its clients trade financial markets through binary options and CFDs. Trading binary options and CFDs on Synthetic Indices is classified as a gambling activity. Remember that gambling can be addictive – please play responsibly. Some products are not available in all countries.

Binary option prices can move significantly even when the underlying market has very low volatility, creating multiple trading opportunities even in quiet markets. Despite their potential volatility, binary contracts are designed to limit the risk to traders: there is a strict cap on the worst-case loss on any contract, since each contract must settle at either $0 or $100.
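To make the settle-at-$0-or-$100 rule concrete: a buyer can lose at most the entry price and a seller at most $100 minus the entry price, which is where the risk cap comes from. A small sketch of that payoff (the prices are illustrative, and the settlement rule here is the generic above/below-strike binary the text describes):

def binary_settlement(underlying: float, strike: float) -> float:
    """A binary contract of this type settles at $100 if the underlying
    finishes above the strike at expiration, otherwise at $0."""
    return 100.0 if underlying > strike else 0.0

entry = 42.50  # hypothetical price paid per contract
print("buyer max loss:   ", entry)          # if the contract settles at $0
print("buyer max profit: ", 100.0 - entry)  # if the contract settles at $100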
Binary Options for Beginners - $10 to $3,500 - Newest Method 2019
10 Dumbest Mistakes as a Beginner in Binary Options: 1. I almost quit 2. Not getting EDUCATION 3. Investing TOO early 4. Not being disciplined 5. Chose the wro...

Do not miss! DEMO ACCOUNT: https://bit.ly/2Lq3NUt I want to kindly ask you to subscribe to my channel, of course if you like my ...

IQ OPTION STRATEGY 2020: Candlesticks Analysis, Price Action Strategy, Binary Options Strategy - Duration: 12:49. Noah Trading, 2,210 views.

You can join Olymptrade trading (bonus 100%) - https://bit.ly/307sNI4 IQ Option is a leading financial trading platform. IQ Option is so easy to handle...

You could earn $500-$1,000 a day with my signals and strategies!!! All of my Strategies, Signals, and Trainings are on sale Right Now!!! Visit www.MyGoldenSi...

The road to success through trading IQ Option. Best Bot Reviews Iq Option 2020. We make videos using this software bot which aims to make it easier for you t...

Binary Options Signals: Binary Options Strategy - Trading Options ★ TRY FOR FREE http://binares.com/bonus - [Free register on binary options] ★ REAL REGIST...

A beginner or professional looking to trade online in 2018, and looking for a highly regulated broker to trade binary, forex, bitcoin, cryptocurrencie...

Strategy Binary Option with EMA (6) RSI (14, 80, 25). This strategy can be used in Olymp Trade, IQ Option, Expert Option, etc. Just follow the trend of the EMA, and you will profit.

BINARY OPTIONS STRATEGY - 80% WINS 500$ in 10 minutes ♛ POCKET OPTION - http://pocketopttion.com ♛ BINOMO - https://qps.ru/2lUo5 ♛ TO RECEIVE BINARY OPTIONS SI...